00:00:00.002 Started by upstream project "autotest-nightly" build number 3918
00:00:00.002 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3293
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.034 The recommended git tool is: git
00:00:00.034 using credential 00000000-0000-0000-0000-000000000002
00:00:00.036 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.059 Fetching changes from the remote Git repository
00:00:00.062 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.093 Using shallow fetch with depth 1
00:00:00.093 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.093 > git --version # timeout=10
00:00:00.146 > git --version # 'git version 2.39.2'
00:00:00.146 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.200 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.200 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.004 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.017 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.029 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:04.029 > git config core.sparsecheckout # timeout=10
00:00:04.039 > git read-tree -mu HEAD # timeout=10
00:00:04.054 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:04.085 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:04.086 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:04.163 [Pipeline] Start of Pipeline
00:00:04.179 [Pipeline] library
00:00:04.180 Loading library shm_lib@master
00:00:04.180 Library shm_lib@master is cached. Copying from home.
00:00:04.203 [Pipeline] node
00:00:04.213 Running on WFP19 in /var/jenkins/workspace/crypto-phy-autotest
00:00:04.215 [Pipeline] {
00:00:04.229 [Pipeline] catchError
00:00:04.231 [Pipeline] {
00:00:04.244 [Pipeline] wrap
00:00:04.250 [Pipeline] {
00:00:04.256 [Pipeline] stage
00:00:04.257 [Pipeline] { (Prologue)
00:00:04.497 [Pipeline] sh
00:00:04.778 + logger -p user.info -t JENKINS-CI
00:00:04.792 [Pipeline] echo
00:00:04.793 Node: WFP19
00:00:04.800 [Pipeline] sh
00:00:05.096 [Pipeline] setCustomBuildProperty
00:00:05.107 [Pipeline] echo
00:00:05.108 Cleanup processes
00:00:05.111 [Pipeline] sh
00:00:05.387 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:05.387 1372750 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:05.399 [Pipeline] sh
00:00:05.683 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:05.683 ++ grep -v 'sudo pgrep'
00:00:05.683 ++ awk '{print $1}'
00:00:05.683 + sudo kill -9
00:00:05.683 + true
00:00:05.695 [Pipeline] cleanWs
00:00:05.702 [WS-CLEANUP] Deleting project workspace...
00:00:05.703 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.708 [WS-CLEANUP] done
00:00:05.712 [Pipeline] setCustomBuildProperty
00:00:05.723 [Pipeline] sh
00:00:06.002 + sudo git config --global --replace-all safe.directory '*'
00:00:06.082 [Pipeline] httpRequest
00:00:06.107 [Pipeline] echo
00:00:06.108 Sorcerer 10.211.164.101 is alive
00:00:06.114 [Pipeline] httpRequest
00:00:06.118 HttpMethod: GET
00:00:06.118 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:06.119 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:06.138 Response Code: HTTP/1.1 200 OK
00:00:06.138 Success: Status code 200 is in the accepted range: 200,404
00:00:06.138 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:10.596 [Pipeline] sh
00:00:10.877 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:10.892 [Pipeline] httpRequest
00:00:10.912 [Pipeline] echo
00:00:10.913 Sorcerer 10.211.164.101 is alive
00:00:10.921 [Pipeline] httpRequest
00:00:10.925 HttpMethod: GET
00:00:10.926 URL: http://10.211.164.101/packages/spdk_8ee2672c448e443b939194ef9f5073cbf4b8a2b4.tar.gz
00:00:10.926 Sending request to url: http://10.211.164.101/packages/spdk_8ee2672c448e443b939194ef9f5073cbf4b8a2b4.tar.gz
00:00:10.928 Response Code: HTTP/1.1 200 OK
00:00:10.929 Success: Status code 200 is in the accepted range: 200,404
00:00:10.929 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_8ee2672c448e443b939194ef9f5073cbf4b8a2b4.tar.gz
00:00:33.336 [Pipeline] sh
00:00:33.618 + tar --no-same-owner -xf spdk_8ee2672c448e443b939194ef9f5073cbf4b8a2b4.tar.gz
00:00:36.915 [Pipeline] sh
00:00:37.197 + git -C spdk log --oneline -n5
00:00:37.197 8ee2672c4 test/bdev: Add test for resized RAID with superblock
00:00:37.197 19f5787c8 raid: skip configured base bdevs in sb examine
00:00:37.197 3b9baa5f8 bdev/raid1: Support resize when increasing the size of base bdevs
00:00:37.197 25a9ccb98 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:00:37.197 38b03952e bdev/compress: check pm path for creating compress bdev
00:00:37.208 [Pipeline] }
00:00:37.224 [Pipeline] // stage
00:00:37.234 [Pipeline] stage
00:00:37.236 [Pipeline] { (Prepare)
00:00:37.253 [Pipeline] writeFile
00:00:37.269 [Pipeline] sh
00:00:37.547 + logger -p user.info -t JENKINS-CI
00:00:37.559 [Pipeline] sh
00:00:37.843 + logger -p user.info -t JENKINS-CI
00:00:37.856 [Pipeline] sh
00:00:38.140 + cat autorun-spdk.conf
00:00:38.140 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:38.140 SPDK_TEST_BLOCKDEV=1
00:00:38.140 SPDK_TEST_ISAL=1
00:00:38.140 SPDK_TEST_CRYPTO=1
00:00:38.140 SPDK_TEST_REDUCE=1
00:00:38.140 SPDK_TEST_VBDEV_COMPRESS=1
00:00:38.140 SPDK_RUN_ASAN=1
00:00:38.140 SPDK_RUN_UBSAN=1
00:00:38.140 SPDK_TEST_ACCEL=1
00:00:38.149 RUN_NIGHTLY=1
00:00:38.153 [Pipeline] readFile
00:00:38.199 [Pipeline] withEnv
00:00:38.201 [Pipeline] {
00:00:38.215 [Pipeline] sh
00:00:38.498 + set -ex
00:00:38.498 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]]
00:00:38.498 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:00:38.498 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:38.498 ++ SPDK_TEST_BLOCKDEV=1
00:00:38.498 ++ SPDK_TEST_ISAL=1
00:00:38.498 ++ SPDK_TEST_CRYPTO=1
00:00:38.498 ++ SPDK_TEST_REDUCE=1
00:00:38.498 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:00:38.498 ++ SPDK_RUN_ASAN=1
00:00:38.498 ++ SPDK_RUN_UBSAN=1
00:00:38.498 ++ SPDK_TEST_ACCEL=1
00:00:38.498 ++ RUN_NIGHTLY=1
00:00:38.498 + case $SPDK_TEST_NVMF_NICS in
00:00:38.498 + DRIVERS=
00:00:38.498 + [[ -n '' ]]
00:00:38.498 + exit 0
00:00:38.508 [Pipeline] }
00:00:38.528 [Pipeline] // withEnv
00:00:38.533 [Pipeline] }
00:00:38.550 [Pipeline] // stage
00:00:38.559 [Pipeline] catchError
00:00:38.561 [Pipeline] {
00:00:38.578 [Pipeline] timeout
00:00:38.578 Timeout set to expire in 1 hr 0 min
00:00:38.580 [Pipeline] {
00:00:38.595 [Pipeline] stage
00:00:38.597 [Pipeline] { (Tests)
00:00:38.614 [Pipeline] sh
00:00:38.898 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest
00:00:38.898 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest
00:00:38.898 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest
00:00:38.898 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]]
00:00:38.898 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:38.898 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output
00:00:38.898 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]]
00:00:38.898 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:38.898 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output
00:00:38.898 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:38.899 + [[ crypto-phy-autotest == pkgdep-* ]]
00:00:38.899 + cd /var/jenkins/workspace/crypto-phy-autotest
00:00:38.899 + source /etc/os-release
00:00:38.899 ++ NAME='Fedora Linux'
00:00:38.899 ++ VERSION='38 (Cloud Edition)'
00:00:38.899 ++ ID=fedora
00:00:38.899 ++ VERSION_ID=38
00:00:38.899 ++ VERSION_CODENAME=
00:00:38.899 ++ PLATFORM_ID=platform:f38
00:00:38.899 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:38.899 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:38.899 ++ LOGO=fedora-logo-icon
00:00:38.899 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:38.899 ++ HOME_URL=https://fedoraproject.org/
00:00:38.899 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:38.899 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:38.899 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:38.899 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:38.899 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:38.899 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:38.899 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:38.899 ++ SUPPORT_END=2024-05-14
00:00:38.899 ++ VARIANT='Cloud Edition'
00:00:38.899 ++ VARIANT_ID=cloud
00:00:38.899 + uname -a
00:00:38.899 Linux spdk-wfp-19 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:38.899 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:00:43.132 Hugepages
00:00:43.132 node hugesize free / total
00:00:43.132 node0 1048576kB 0 / 0
00:00:43.132 node0 2048kB 0 / 0
00:00:43.132 node1 1048576kB 0 / 0
00:00:43.132 node1 2048kB 0 / 0
00:00:43.132
00:00:43.132 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:43.132 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:43.132 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:43.132 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:43.132 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:43.132 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:43.132 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:43.132 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:43.132 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:43.132 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:43.132 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:43.132 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:43.132 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:43.132 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:43.132 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:43.132 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:43.132 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:43.132 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:43.132 + rm -f /tmp/spdk-ld-path
00:00:43.132 + source autorun-spdk.conf
00:00:43.132 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:43.132 ++ SPDK_TEST_BLOCKDEV=1
00:00:43.132 ++ SPDK_TEST_ISAL=1
00:00:43.132 ++ SPDK_TEST_CRYPTO=1
00:00:43.132 ++ SPDK_TEST_REDUCE=1
00:00:43.132 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:00:43.132 ++ SPDK_RUN_ASAN=1
00:00:43.132 ++ SPDK_RUN_UBSAN=1
00:00:43.132 ++ SPDK_TEST_ACCEL=1
00:00:43.132 ++ RUN_NIGHTLY=1
00:00:43.132 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:43.132 + [[ -n '' ]]
00:00:43.132 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:43.132 + for M in /var/spdk/build-*-manifest.txt
00:00:43.132 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:43.132 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:43.132 + for M in /var/spdk/build-*-manifest.txt
00:00:43.132 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:43.132 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:43.132 ++ uname
00:00:43.132 + [[ Linux == \L\i\n\u\x ]]
00:00:43.132 + sudo dmesg -T
00:00:43.132 + sudo dmesg --clear
00:00:43.132 + dmesg_pid=1373814
00:00:43.132 + [[ Fedora Linux == FreeBSD ]]
00:00:43.132 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:43.132 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:43.132 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:43.132 + [[ -x /usr/src/fio-static/fio ]]
00:00:43.132 + export FIO_BIN=/usr/src/fio-static/fio
00:00:43.132 + FIO_BIN=/usr/src/fio-static/fio
00:00:43.132 + sudo dmesg -Tw
00:00:43.132 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:43.132 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:43.132 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:43.132 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:43.132 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:43.132 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:43.132 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:43.132 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:43.132 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:00:43.132 Test configuration:
00:00:43.132 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:43.132 SPDK_TEST_BLOCKDEV=1
00:00:43.132 SPDK_TEST_ISAL=1
00:00:43.132 SPDK_TEST_CRYPTO=1
00:00:43.132 SPDK_TEST_REDUCE=1
00:00:43.132 SPDK_TEST_VBDEV_COMPRESS=1
00:00:43.132 SPDK_RUN_ASAN=1
00:00:43.132 SPDK_RUN_UBSAN=1
00:00:43.132 SPDK_TEST_ACCEL=1
00:00:43.132 RUN_NIGHTLY=1
16:17:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:00:43.132 16:17:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:43.132 16:17:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:43.132 16:17:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:43.132 16:17:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:43.132 16:17:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:43.132 16:17:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:43.132 16:17:39 -- paths/export.sh@5 -- $ export PATH
00:00:43.132 16:17:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:43.132 16:17:39 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:00:43.132 16:17:39 -- common/autobuild_common.sh@447 -- $ date +%s
00:00:43.132 16:17:39 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721830659.XXXXXX
00:00:43.132 16:17:39 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721830659.Ey3x7r
00:00:43.133 16:17:39 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:00:43.133 16:17:39 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:00:43.133 16:17:39 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:00:43.133
16:17:39 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:43.133 16:17:39 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:43.133 16:17:39 -- common/autobuild_common.sh@463 -- $ get_config_params
00:00:43.133 16:17:39 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:00:43.133 16:17:39 -- common/autotest_common.sh@10 -- $ set +x
00:00:43.133 16:17:39 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:00:43.133 16:17:39 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:00:43.133 16:17:39 -- pm/common@17 -- $ local monitor
00:00:43.133 16:17:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:43.133 16:17:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:43.133 16:17:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:43.133 16:17:39 -- pm/common@21 -- $ date +%s
00:00:43.133 16:17:39 -- pm/common@21 -- $ date +%s
00:00:43.133 16:17:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:43.133 16:17:39 -- pm/common@25 -- $ sleep 1
00:00:43.133 16:17:39 -- pm/common@21 -- $ date +%s
00:00:43.133 16:17:39 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721830659
00:00:43.133 16:17:39 -- pm/common@21 -- $ date +%s
00:00:43.133 16:17:39 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721830659
00:00:43.133 16:17:39 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721830659
00:00:43.133 16:17:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721830659
00:00:43.133 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721830659_collect-vmstat.pm.log
00:00:43.133 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721830659_collect-cpu-load.pm.log
00:00:43.133 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721830659_collect-cpu-temp.pm.log
00:00:43.133 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721830659_collect-bmc-pm.bmc.pm.log
00:00:44.071 16:17:40 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:00:44.071 16:17:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:44.071 16:17:40 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:44.071 16:17:40 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:44.071 16:17:40 -- spdk/autobuild.sh@16 -- $ date -u
00:00:44.071 Wed Jul 24 02:17:40 PM UTC 2024
00:00:44.071 16:17:40 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:44.071 v24.09-pre-316-g8ee2672c4
00:00:44.071 16:17:40 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:00:44.071 16:17:40 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:00:44.071 16:17:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:00:44.071 16:17:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:00:44.071 16:17:40 -- common/autotest_common.sh@10 -- $ set +x
00:00:44.071 ************************************
00:00:44.071 START TEST asan
00:00:44.071 ************************************
00:00:44.071 16:17:40 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:00:44.071 using asan
00:00:44.071
00:00:44.071 real	0m0.001s
00:00:44.071 user	0m0.001s
00:00:44.071 sys	0m0.000s
00:00:44.071 16:17:40 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:00:44.071 16:17:40 asan -- common/autotest_common.sh@10 -- $ set +x
00:00:44.071 ************************************
00:00:44.071 END TEST asan
00:00:44.071 ************************************
00:00:44.071 16:17:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:44.071 16:17:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:44.071 16:17:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:00:44.071 16:17:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:00:44.071 16:17:40 -- common/autotest_common.sh@10 -- $ set +x
00:00:44.071 ************************************
00:00:44.071 START TEST ubsan
00:00:44.071 ************************************
00:00:44.071 16:17:40 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:00:44.071 using ubsan
00:00:44.071
00:00:44.071 real	0m0.000s
00:00:44.071 user	0m0.000s
00:00:44.071 sys	0m0.000s
00:00:44.071 16:17:40 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:00:44.071 16:17:40 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:44.071 ************************************
00:00:44.071 END TEST ubsan
00:00:44.071 ************************************
00:00:44.071 16:17:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:44.071 16:17:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:44.071 16:17:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:44.071 16:17:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:44.071 16:17:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:44.071 16:17:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:44.071 16:17:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:44.071 16:17:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:44.071 16:17:40 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:00:44.330 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk
00:00:44.330 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build
00:00:44.589 Using 'verbs' RDMA provider
00:01:00.847 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:15.810 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:15.810 Creating mk/config.mk...done.
00:01:15.810 Creating mk/cc.flags.mk...done.
00:01:15.810 Type 'make' to build.
00:01:15.810 16:18:10 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:01:15.810 16:18:10 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:15.810 16:18:10 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:15.810 16:18:10 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.810 ************************************
00:01:15.810 START TEST make
00:01:15.810 ************************************
00:01:15.810 16:18:11 make -- common/autotest_common.sh@1125 -- $ make -j112
00:01:15.810 make[1]: Nothing to be done for 'all'.
00:01:47.975 The Meson build system
00:01:47.975 Version: 1.3.1
00:01:47.975 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk
00:01:47.975 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp
00:01:47.975 Build type: native build
00:01:47.975 Program cat found: YES (/usr/bin/cat)
00:01:47.975 Project name: DPDK
00:01:47.975 Project version: 24.03.0
00:01:47.975 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:47.975 C linker for the host machine: cc ld.bfd 2.39-16
00:01:47.975 Host machine cpu family: x86_64
00:01:47.975 Host machine cpu: x86_64
00:01:47.975 Message: ## Building in Developer Mode ##
00:01:47.975 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:47.975 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:47.975 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:47.975 Program python3 found: YES (/usr/bin/python3)
00:01:47.975 Program cat found: YES (/usr/bin/cat)
00:01:47.975 Compiler for C supports arguments -march=native: YES
00:01:47.975 Checking for size of "void *" : 8
00:01:47.975 Checking for size of "void *" : 8 (cached)
00:01:47.975 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:47.975 Library m found: YES
00:01:47.975 Library numa found: YES
00:01:47.975 Has header "numaif.h" : YES
00:01:47.975 Library fdt found: NO
00:01:47.975 Library execinfo found: NO
00:01:47.975 Has header "execinfo.h" : YES
00:01:47.975 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:47.975 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:47.975 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:47.975 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:47.975 Run-time dependency openssl found: YES 3.0.9
00:01:47.975 Run-time dependency libpcap found: YES 1.10.4
00:01:47.975 Has header "pcap.h" with dependency libpcap: YES
00:01:47.975 Compiler for C supports arguments -Wcast-qual: YES
00:01:47.975 Compiler for C supports arguments -Wdeprecated: YES
00:01:47.975 Compiler for C supports arguments -Wformat: YES
00:01:47.975 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:47.975 Compiler for C supports arguments -Wformat-security: NO
00:01:47.975 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:47.975 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:47.975 Compiler for C supports arguments -Wnested-externs: YES
00:01:47.975 Compiler for C supports arguments -Wold-style-definition: YES
00:01:47.975 Compiler for C supports arguments -Wpointer-arith: YES
00:01:47.975 Compiler for C supports arguments -Wsign-compare: YES
00:01:47.975 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:47.975 Compiler for C supports arguments -Wundef: YES
00:01:47.976 Compiler for C supports arguments -Wwrite-strings: YES
00:01:47.976 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:47.976 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:47.976 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:47.976 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:47.976 Program objdump found: YES (/usr/bin/objdump)
00:01:47.976 Compiler for C supports arguments -mavx512f: YES
00:01:47.976 Checking if "AVX512 checking" compiles: YES
00:01:47.976 Fetching value of define "__SSE4_2__" : 1
00:01:47.976 Fetching value of define "__AES__" : 1
00:01:47.976 Fetching value of define "__AVX__" : 1
00:01:47.976 Fetching value of define "__AVX2__" : 1
00:01:47.976 Fetching value of define "__AVX512BW__" : 1
00:01:47.976 Fetching value of define "__AVX512CD__" : 1
00:01:47.976 Fetching value of define "__AVX512DQ__" : 1
00:01:47.976 Fetching value of define "__AVX512F__" : 1
00:01:47.976 Fetching value of define "__AVX512VL__" : 1
00:01:47.976 Fetching value of define "__PCLMUL__" : 1
00:01:47.976 Fetching value of define "__RDRND__" : 1
00:01:47.976 Fetching value of define "__RDSEED__" : 1
00:01:47.976 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:47.976 Fetching value of define "__znver1__" : (undefined)
00:01:47.976 Fetching value of define "__znver2__" : (undefined)
00:01:47.976 Fetching value of define "__znver3__" : (undefined)
00:01:47.976 Fetching value of define "__znver4__" : (undefined)
00:01:47.976 Library asan found: YES
00:01:47.976 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:47.976 Message: lib/log: Defining dependency "log"
00:01:47.976 Message: lib/kvargs: Defining dependency "kvargs"
00:01:47.976 Message: lib/telemetry: Defining dependency "telemetry"
00:01:47.976 Library rt found: YES
00:01:47.976 Checking for function "getentropy" : NO
00:01:47.976 Message: lib/eal: Defining dependency "eal"
00:01:47.976 Message: lib/ring: Defining dependency "ring"
00:01:47.976 Message: lib/rcu: Defining dependency "rcu"
00:01:47.976 Message: lib/mempool: Defining dependency "mempool"
00:01:47.976 Message: lib/mbuf: Defining dependency "mbuf"
00:01:47.976 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:47.976 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:47.976 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:47.976 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:47.976 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:47.976 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:47.976 Compiler for C supports arguments -mpclmul: YES
00:01:47.976 Compiler for C supports arguments -maes: YES
00:01:47.976 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:47.976 Compiler for C supports arguments -mavx512bw: YES
00:01:47.976 Compiler for C supports arguments -mavx512dq: YES
00:01:47.976 Compiler for C supports arguments -mavx512vl: YES
00:01:47.976 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:47.976 Compiler for C supports arguments -mavx2: YES
00:01:47.976 Compiler for C supports arguments -mavx: YES
00:01:47.976 Message: lib/net: Defining dependency "net"
00:01:47.976 Message: lib/meter: Defining dependency "meter"
00:01:47.976 Message: lib/ethdev: Defining dependency "ethdev"
00:01:47.976 Message: lib/pci: Defining dependency "pci"
00:01:47.976 Message: lib/cmdline: Defining dependency "cmdline"
00:01:47.976 Message: lib/hash: Defining dependency "hash"
00:01:47.976 Message: lib/timer: Defining dependency "timer"
00:01:47.976 Message: lib/compressdev: Defining dependency "compressdev"
00:01:47.976 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:47.976 Message: lib/dmadev: Defining dependency "dmadev"
00:01:47.976 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:47.976 Message: lib/power: Defining dependency "power"
00:01:47.976 Message: lib/reorder: Defining dependency "reorder"
00:01:47.976 Message: lib/security: Defining dependency "security"
00:01:47.976 Has header "linux/userfaultfd.h" : YES
00:01:47.976 Has header "linux/vduse.h" : YES
00:01:47.976 Message: lib/vhost: Defining dependency "vhost"
00:01:47.976 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:47.976 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:01:47.976 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:47.976 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:47.976 Compiler for C supports arguments -std=c11: YES
00:01:47.976 Compiler for C supports arguments -Wno-strict-prototypes: YES
00:01:47.976 Compiler for C supports arguments -D_BSD_SOURCE: YES
00:01:47.976 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES
00:01:47.976 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES
00:01:47.976 Run-time dependency libmlx5 found: YES 1.24.44.0
00:01:47.976 Run-time dependency libibverbs found: YES 1.14.44.0
00:01:47.976 Library mtcr_ul found: NO
00:01:47.976 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES
00:01:47.976 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES
00:01:47.976 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES
00:01:47.976 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES
00:01:47.976 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES
00:01:47.976 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES
00:01:47.976 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES
00:01:47.976 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES
00:01:47.976 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES
00:01:47.976 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES
00:01:47.976 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES
00:01:47.976 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO
00:01:47.976 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO
00:01:47.976 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES
00:01:47.976 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO
00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO
00:01:50.512 Header
"infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 00:01:50.512 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies 
libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:01:50.512 Checking whether type "struct 
ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:01:50.512 Configuring mlx5_autoconf.h using configuration 00:01:50.512 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:01:50.512 Run-time dependency libcrypto found: YES 3.0.9 00:01:50.512 Library IPSec_MB found: YES 00:01:50.512 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:01:50.512 Message: drivers/common/qat: Defining dependency "common_qat" 00:01:50.512 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:50.512 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:50.512 Library IPSec_MB found: YES 00:01:50.512 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:01:50.512 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:01:50.512 Compiler for C supports arguments -std=c11: YES (cached) 00:01:50.512 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:50.512 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:50.512 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:50.512 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:50.512 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:01:50.512 Run-time dependency libisal found: NO (tried pkgconfig) 00:01:50.512 Library libisal found: NO 00:01:50.512 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:01:50.512 Compiler for C supports arguments -std=c11: YES (cached) 00:01:50.512 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:50.512 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:50.512 Compiler 
for C supports arguments -D_DEFAULT_SOURCE: YES (cached)
00:01:50.512 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached)
00:01:50.512 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5"
00:01:50.512 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:50.512 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:50.512 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:50.512 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:50.512 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:50.512 Program doxygen found: YES (/usr/bin/doxygen)
00:01:50.512 Configuring doxy-api-html.conf using configuration
00:01:50.512 Configuring doxy-api-man.conf using configuration
00:01:50.512 Program mandb found: YES (/usr/bin/mandb)
00:01:50.512 Program sphinx-build found: NO
00:01:50.512 Configuring rte_build_config.h using configuration
00:01:50.512 Message:
00:01:50.513 =================
00:01:50.513 Applications Enabled
00:01:50.513 =================
00:01:50.513
00:01:50.513 apps:
00:01:50.513
00:01:50.513
00:01:50.513 Message:
00:01:50.513 =================
00:01:50.513 Libraries Enabled
00:01:50.513 =================
00:01:50.513
00:01:50.513 libs:
00:01:50.513 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:50.513 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:50.513 cryptodev, dmadev, power, reorder, security, vhost,
00:01:50.513
00:01:50.513 Message:
00:01:50.513 ===============
00:01:50.513 Drivers Enabled
00:01:50.513 ===============
00:01:50.513
00:01:50.513 common:
00:01:50.513 mlx5, qat,
00:01:50.513 bus:
00:01:50.513 auxiliary, pci, vdev,
00:01:50.513 mempool:
00:01:50.513 ring,
00:01:50.513 dma:
00:01:50.513
00:01:50.513 net:
00:01:50.513
00:01:50.513 crypto:
00:01:50.513 ipsec_mb, mlx5,
00:01:50.513 compress:
00:01:50.513 isal, mlx5,
00:01:50.513 vdpa:
00:01:50.513
00:01:50.513
00:01:50.513 Message:
00:01:50.513 =================
00:01:50.513 Content Skipped
00:01:50.513 =================
00:01:50.513
00:01:50.513 apps:
00:01:50.513 dumpcap: explicitly disabled via build config
00:01:50.513 graph: explicitly disabled via build config
00:01:50.513 pdump: explicitly disabled via build config
00:01:50.513 proc-info: explicitly disabled via build config
00:01:50.513 test-acl: explicitly disabled via build config
00:01:50.513 test-bbdev: explicitly disabled via build config
00:01:50.513 test-cmdline: explicitly disabled via build config
00:01:50.513 test-compress-perf: explicitly disabled via build config
00:01:50.513 test-crypto-perf: explicitly disabled via build config
00:01:50.513 test-dma-perf: explicitly disabled via build config
00:01:50.513 test-eventdev: explicitly disabled via build config
00:01:50.513 test-fib: explicitly disabled via build config
00:01:50.513 test-flow-perf: explicitly disabled via build config
00:01:50.513 test-gpudev: explicitly disabled via build config
00:01:50.513 test-mldev: explicitly disabled via build config
00:01:50.513 test-pipeline: explicitly disabled via build config
00:01:50.513 test-pmd: explicitly disabled via build config
00:01:50.513 test-regex: explicitly disabled via build config
00:01:50.513 test-sad: explicitly disabled via build config
00:01:50.513 test-security-perf: explicitly disabled via build config
00:01:50.513
00:01:50.513 libs:
00:01:50.513 argparse: explicitly disabled via build config
00:01:50.513 metrics: explicitly disabled via build config
00:01:50.513 acl: explicitly disabled via build config
00:01:50.513 bbdev: explicitly disabled via build config
00:01:50.513 bitratestats: explicitly disabled via build config
00:01:50.513 bpf: explicitly disabled via build config
00:01:50.513 cfgfile: explicitly disabled via build config
00:01:50.513 distributor: explicitly disabled via build config
00:01:50.513 efd: explicitly disabled via build config
00:01:50.513 eventdev: explicitly disabled via build config
00:01:50.513 dispatcher: explicitly disabled via build config
00:01:50.513 gpudev: explicitly disabled via build config
00:01:50.513 gro: explicitly disabled via build config
00:01:50.513 gso: explicitly disabled via build config
00:01:50.513 ip_frag: explicitly disabled via build config
00:01:50.513 jobstats: explicitly disabled via build config
00:01:50.513 latencystats: explicitly disabled via build config
00:01:50.513 lpm: explicitly disabled via build config
00:01:50.513 member: explicitly disabled via build config
00:01:50.513 pcapng: explicitly disabled via build config
00:01:50.513 rawdev: explicitly disabled via build config
00:01:50.513 regexdev: explicitly disabled via build config
00:01:50.513 mldev: explicitly disabled via build config
00:01:50.513 rib: explicitly disabled via build config
00:01:50.513 sched: explicitly disabled via build config
00:01:50.513 stack: explicitly disabled via build config
00:01:50.513 ipsec: explicitly disabled via build config
00:01:50.513 pdcp: explicitly disabled via build config
00:01:50.513 fib: explicitly disabled via build config
00:01:50.513 port: explicitly disabled via build config
00:01:50.513 pdump: explicitly disabled via build config
00:01:50.513 table: explicitly disabled via build config
00:01:50.513 pipeline: explicitly disabled via build config
00:01:50.513 graph: explicitly disabled via build config
00:01:50.513 node: explicitly disabled via build config
00:01:50.513
00:01:50.513 drivers:
00:01:50.513 common/cpt: not in enabled drivers build config
00:01:50.513 common/dpaax: not in enabled drivers build config
00:01:50.513 common/iavf: not in enabled drivers build config
00:01:50.513 common/idpf: not in enabled drivers build config
00:01:50.513 common/ionic: not in enabled drivers build config
00:01:50.513 common/mvep: not in enabled drivers build config
00:01:50.513 common/octeontx: not in enabled drivers build config
00:01:50.513 bus/cdx: not in enabled drivers build config
00:01:50.513 bus/dpaa: not in enabled drivers build config
00:01:50.513 bus/fslmc: not in enabled drivers build config
00:01:50.513 bus/ifpga: not in enabled drivers build config
00:01:50.513 bus/platform: not in enabled drivers build config
00:01:50.513 bus/uacce: not in enabled drivers build config
00:01:50.513 bus/vmbus: not in enabled drivers build config
00:01:50.513 common/cnxk: not in enabled drivers build config
00:01:50.513 common/nfp: not in enabled drivers build config
00:01:50.513 common/nitrox: not in enabled drivers build config
00:01:50.513 common/sfc_efx: not in enabled drivers build config
00:01:50.513 mempool/bucket: not in enabled drivers build config
00:01:50.513 mempool/cnxk: not in enabled drivers build config
00:01:50.513 mempool/dpaa: not in enabled drivers build config
00:01:50.513 mempool/dpaa2: not in enabled drivers build config
00:01:50.513 mempool/octeontx: not in enabled drivers build config
00:01:50.513 mempool/stack: not in enabled drivers build config
00:01:50.513 dma/cnxk: not in enabled drivers build config
00:01:50.513 dma/dpaa: not in enabled drivers build config
00:01:50.513 dma/dpaa2: not in enabled drivers build config
00:01:50.513 dma/hisilicon: not in enabled drivers build config
00:01:50.513 dma/idxd: not in enabled drivers build config
00:01:50.513 dma/ioat: not in enabled drivers build config
00:01:50.513 dma/skeleton: not in enabled drivers build config
00:01:50.513 net/af_packet: not in enabled drivers build config
00:01:50.513 net/af_xdp: not in enabled drivers build config
00:01:50.513 net/ark: not in enabled drivers build config
00:01:50.513 net/atlantic: not in enabled drivers build config
00:01:50.513 net/avp: not in enabled drivers build config
00:01:50.513 net/axgbe: not in enabled drivers build config
00:01:50.513 net/bnx2x: not in enabled drivers build config
00:01:50.513 net/bnxt: not in enabled drivers build config
00:01:50.513 net/bonding: not in enabled drivers build config
00:01:50.513 net/cnxk: not in enabled drivers build config
00:01:50.513 net/cpfl: not in enabled drivers build config
00:01:50.513 net/cxgbe: not in enabled drivers build config
00:01:50.513 net/dpaa: not in enabled drivers build config
00:01:50.513 net/dpaa2: not in enabled drivers build config
00:01:50.513 net/e1000: not in enabled drivers build config
00:01:50.513 net/ena: not in enabled drivers build config
00:01:50.513 net/enetc: not in enabled drivers build config
00:01:50.513 net/enetfec: not in enabled drivers build config
00:01:50.513 net/enic: not in enabled drivers build config
00:01:50.513 net/failsafe: not in enabled drivers build config
00:01:50.513 net/fm10k: not in enabled drivers build config
00:01:50.513 net/gve: not in enabled drivers build config
00:01:50.513 net/hinic: not in enabled drivers build config
00:01:50.513 net/hns3: not in enabled drivers build config
00:01:50.513 net/i40e: not in enabled drivers build config
00:01:50.513 net/iavf: not in enabled drivers build config
00:01:50.513 net/ice: not in enabled drivers build config
00:01:50.513 net/idpf: not in enabled drivers build config
00:01:50.513 net/igc: not in enabled drivers build config
00:01:50.513 net/ionic: not in enabled drivers build config
00:01:50.513 net/ipn3ke: not in enabled drivers build config
00:01:50.513 net/ixgbe: not in enabled drivers build config
00:01:50.513 net/mana: not in enabled drivers build config
00:01:50.513 net/memif: not in enabled drivers build config
00:01:50.513 net/mlx4: not in enabled drivers build config
00:01:50.513 net/mlx5: not in enabled drivers build config
00:01:50.513 net/mvneta: not in enabled drivers build config
00:01:50.513 net/mvpp2: not in enabled drivers build config
00:01:50.513 net/netvsc: not in enabled drivers build config
00:01:50.513 net/nfb: not in enabled drivers build config
00:01:50.513 net/nfp: not in enabled drivers build config
00:01:50.513 net/ngbe: not in enabled drivers build config
00:01:50.513 net/null: not in enabled drivers build config
00:01:50.513 net/octeontx: not in enabled drivers build config
00:01:50.513 net/octeon_ep: not in enabled drivers build config
00:01:50.513 net/pcap: not in enabled drivers build config
00:01:50.513 net/pfe: not in enabled drivers build config
00:01:50.513 net/qede: not in enabled drivers build config
00:01:50.513 net/ring: not in enabled drivers build config
00:01:50.513 net/sfc: not in enabled drivers build config
00:01:50.513 net/softnic: not in enabled drivers build config
00:01:50.513 net/tap: not in enabled drivers build config
00:01:50.513 net/thunderx: not in enabled drivers build config
00:01:50.513 net/txgbe: not in enabled drivers build config
00:01:50.513 net/vdev_netvsc: not in enabled drivers build config
00:01:50.513 net/vhost: not in enabled drivers build config
00:01:50.513 net/virtio: not in enabled drivers build config
00:01:50.513 net/vmxnet3: not in enabled drivers build config
00:01:50.513 raw/*: missing internal dependency, "rawdev"
00:01:50.513 crypto/armv8: not in enabled drivers build config
00:01:50.513 crypto/bcmfs: not in enabled drivers build config
00:01:50.513 crypto/caam_jr: not in enabled drivers build config
00:01:50.514 crypto/ccp: not in enabled drivers build config
00:01:50.514 crypto/cnxk: not in enabled drivers build config
00:01:50.514 crypto/dpaa_sec: not in enabled drivers build config
00:01:50.514 crypto/dpaa2_sec: not in enabled drivers build config
00:01:50.514 crypto/mvsam: not in enabled drivers build config
00:01:50.514 crypto/nitrox: not in enabled drivers build config
00:01:50.514 crypto/null: not in enabled drivers build config
00:01:50.514 crypto/octeontx: not in enabled drivers build config
00:01:50.514 crypto/openssl: not in enabled drivers build config
00:01:50.514 crypto/scheduler: not in enabled drivers build config
00:01:50.514 crypto/uadk: not in enabled drivers build config
00:01:50.514 crypto/virtio: not in enabled drivers build config
00:01:50.514 compress/nitrox: not in enabled drivers build config
00:01:50.514 compress/octeontx: not in enabled drivers build config
00:01:50.514 compress/zlib: not in enabled drivers build config
00:01:50.514 regex/*: missing internal dependency, "regexdev"
00:01:50.514 ml/*: missing internal dependency, "mldev"
00:01:50.514 vdpa/ifc: not in enabled drivers build config
00:01:50.514 vdpa/mlx5: not in enabled drivers build config
00:01:50.514 vdpa/nfp: not in enabled drivers build config
00:01:50.514 vdpa/sfc: not in enabled drivers build config
00:01:50.514 event/*: missing internal dependency, "eventdev"
00:01:50.514 baseband/*: missing internal dependency, "bbdev"
00:01:50.514 gpu/*: missing internal dependency, "gpudev"
00:01:50.514
00:01:50.514
00:01:50.780 Build targets in project: 115
00:01:50.780
00:01:50.780 DPDK 24.03.0
00:01:50.780
00:01:50.780 User defined options
00:01:50.780 buildtype : debug
00:01:50.780 default_library : shared
00:01:50.780 libdir : lib
00:01:50.780 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build
00:01:50.780 b_sanitize : address
00:01:50.780 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror
00:01:50.780 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal
00:01:50.780 cpu_instruction_set: native
00:01:50.780 disable_apps : test-eventdev,test-acl,test-fib,test-cmdline,test-pipeline,test-dma-perf,graph,test-compress-perf,test-bbdev,test,proc-info,dumpcap,test-gpudev,test-regex,test-security-perf,test-mldev,test-sad,pdump,test-crypto-perf,test-pmd,test-flow-perf
00:01:50.780 disable_libs : ip_frag,metrics,fib,pipeline,regexdev,pdcp,rawdev,sched,argparse,bitratestats,jobstats,efd,graph,pcapng,latencystats,lpm,port,table,ipsec,bbdev,dispatcher,rib,gpudev,member,node,distributor,mldev,stack,acl,gro,pdump,eventdev,bpf,gso,cfgfile
00:01:50.780 enable_docs : false
00:01:50.780 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5
00:01:50.780 enable_kmods : false
00:01:50.780 max_lcores : 128
00:01:50.780 tests : false
00:01:50.780
00:01:50.780 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:51.044 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp'
00:01:51.313 [1/378] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:51.313 [2/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:51.313 [3/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:51.313 [4/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:51.313 [5/378] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:51.313 [6/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:51.313 [7/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:51.313 [8/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:51.313 [9/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:51.313 [10/378] Linking static target lib/librte_kvargs.a
00:01:51.313 [11/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:51.583 [12/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:51.583 [13/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:51.583 [14/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:51.583 [15/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:51.583 [16/378] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:51.583 [17/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:51.583 [18/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:51.583 [19/378] Linking static target lib/librte_log.a
00:01:51.583 [20/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:51.583 [21/378] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:51.583 [22/378] Linking static target lib/librte_pci.a
00:01:51.583 [23/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:51.583 [24/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:51.583 [25/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:51.583 [26/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:51.583 [27/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:51.583 [28/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:51.583 [29/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:51.583 [30/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:51.583 [31/378] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:51.843 [32/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:51.843 [33/378] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:51.843 [34/378] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:51.843 [35/378] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:51.843 [36/378] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.843 [37/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:51.843 [38/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:51.843 [39/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:51.843 [40/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:51.843 [41/378] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.843 [42/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:52.110 [43/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:52.110 [44/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:52.110 [45/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:52.110 [46/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:52.110 [47/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:52.110 [48/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:52.110 [49/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:52.110 [50/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:52.110 [51/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:52.110 [52/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:52.110 [53/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:52.110 [54/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:52.110 [55/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:52.110 [56/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:52.110 [57/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:52.110 [58/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:52.110 [59/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:52.110 [60/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:52.110 [61/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:52.110 [62/378] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:52.110 [63/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:52.110 [64/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:52.110 [65/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:52.110 [66/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:52.110 [67/378] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:52.110 [68/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:52.110 [69/378] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:52.110 [70/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:52.110 [71/378] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:52.110 [72/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:52.110 [73/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:52.110 [74/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:52.110 [75/378] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:52.110 [76/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:52.110 [77/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:52.110 [78/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:52.110 [79/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:52.110 [80/378] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:52.110 [81/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:52.110 [82/378] Linking static target lib/librte_meter.a
00:01:52.110 [83/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:52.110 [84/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:52.110 [85/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:52.110 [86/378] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:52.110 [87/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:52.110 [88/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:52.110 [89/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:52.110 [90/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:52.110 [91/378] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:52.110 [92/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:52.110 [93/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:52.110 [94/378] Linking static target lib/librte_ring.a
00:01:52.110 [95/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:52.110 [96/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:52.110 [97/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:52.110 [98/378] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:52.110 [99/378] Linking static target lib/librte_telemetry.a
00:01:52.110 [100/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:52.110 [101/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:52.110 [102/378] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:52.110 [103/378] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:52.110 [104/378] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:52.110 [105/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:52.110 [106/378] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:52.371 [107/378] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:52.371 [108/378] Linking static target lib/librte_timer.a
00:01:52.371 [109/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o
00:01:52.371 [110/378] Linking static target lib/librte_cmdline.a
00:01:52.371 [111/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:52.371 [112/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:52.371 [113/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:52.371 [114/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:52.371 [115/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:52.371 [116/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:52.371 [117/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:52.371 [118/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:52.371 [119/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:52.371 [120/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:52.371 [121/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:52.371 [122/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o
00:01:52.371 [123/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:52.371 [124/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:52.371 [125/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:52.371 [126/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:52.371 [127/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:52.371 [128/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:52.371 [129/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:52.371 [130/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:52.371 [131/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:52.371 [132/378] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:52.371 [133/378] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:52.371 [134/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:52.371 [135/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:52.632 [136/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:52.632 [137/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:52.632 [138/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:52.632 [139/378] Linking static target lib/librte_mempool.a
00:01:52.632 [140/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:52.632 [141/378] Linking static target lib/librte_dmadev.a
00:01:52.632 [142/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:52.632 [143/378] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:52.632 [144/378] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:52.632 [145/378] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.632 [146/378] Linking static target lib/librte_net.a
00:01:52.632 [147/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:52.632 [148/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:52.632 [149/378] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:52.632 [150/378] Compiling
C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:52.632 [151/378] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:52.632 [152/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:01:52.632 [153/378] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.632 [154/378] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.632 [155/378] Linking static target lib/librte_rcu.a 00:01:52.632 [156/378] Linking target lib/librte_log.so.24.1 00:01:52.632 [157/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:52.632 [158/378] Linking static target lib/librte_compressdev.a 00:01:52.632 [159/378] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.890 [160/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:01:52.890 [161/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:52.890 [162/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.890 [163/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:01:52.890 [164/378] Linking static target lib/librte_eal.a 00:01:52.890 [165/378] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.890 [166/378] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.890 [167/378] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.890 [168/378] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.890 [169/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:01:52.890 [170/378] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:01:52.890 [171/378] Linking static target lib/librte_power.a 00:01:52.890 [172/378] Generating symbol file 
lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:52.890 [173/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:52.890 [174/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:52.891 [175/378] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:52.891 [176/378] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.891 [177/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:52.891 [178/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:52.891 [179/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:52.891 [180/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.891 [181/378] Linking target lib/librte_kvargs.so.24.1 00:01:52.891 [182/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:01:52.891 [183/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:01:52.891 [184/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:01:52.891 [185/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o 00:01:52.891 [186/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:01:52.891 [187/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:01:52.891 [188/378] Linking target lib/librte_telemetry.so.24.1 00:01:52.891 [189/378] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.891 [190/378] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:53.150 [191/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:01:53.150 [192/378] Compiling C object 
drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:01:53.150 [193/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:01:53.150 [194/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:01:53.150 [195/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:01:53.150 [196/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:01:53.150 [197/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o 00:01:53.150 [198/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:53.150 [199/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:01:53.150 [200/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o 00:01:53.150 [201/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:01:53.150 [202/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:01:53.150 [203/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:01:53.150 [204/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:01:53.150 [205/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:53.150 [206/378] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.150 [207/378] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:53.150 [208/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:01:53.150 [209/378] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:53.150 [210/378] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:53.150 [211/378] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:01:53.150 [212/378] Linking static target lib/librte_reorder.a 00:01:53.150 [213/378] Linking static target lib/librte_security.a 00:01:53.150 [214/378] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:01:53.150 [215/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:01:53.150 [216/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:01:53.150 [217/378] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:53.150 [218/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o 00:01:53.150 [219/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:01:53.150 [220/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:01:53.150 [221/378] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:53.150 [222/378] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:53.150 [223/378] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:53.150 [224/378] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:53.150 [225/378] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:53.150 [226/378] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:53.150 [227/378] Linking static target drivers/librte_bus_auxiliary.a 00:01:53.150 [228/378] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:53.150 [229/378] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.150 [230/378] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:53.150 [231/378] Compiling 
C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:53.150 [232/378] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.150 [233/378] Linking static target drivers/librte_bus_vdev.a 00:01:53.150 [234/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:01:53.150 [235/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o 00:01:53.150 [236/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:01:53.150 [237/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:01:53.150 [238/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:01:53.150 [239/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:01:53.409 [240/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:01:53.409 [241/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:53.409 [242/378] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:01:53.409 [243/378] Linking static target lib/librte_mbuf.a 00:01:53.409 [244/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:01:53.409 [245/378] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:01:53.409 [246/378] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:53.409 [247/378] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:53.409 [248/378] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:53.409 [249/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:01:53.409 [250/378] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:53.409 [251/378] Compiling C object 
drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:01:53.409 [252/378] Linking static target drivers/librte_bus_pci.a 00:01:53.409 [253/378] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.409 [254/378] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:53.409 [255/378] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:53.409 [256/378] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.409 [257/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:53.409 [258/378] Linking static target drivers/librte_mempool_ring.a 00:01:53.409 [259/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:01:53.409 [260/378] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:01:53.409 [261/378] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.409 [262/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:01:53.409 [263/378] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:01:53.409 [264/378] Linking static target drivers/libtmp_rte_compress_isal.a 00:01:53.668 [265/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:53.668 [266/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:01:53.668 [267/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:01:53.668 [268/378] Linking static target lib/librte_cryptodev.a 00:01:53.668 [269/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:01:53.668 [270/378] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.668 [271/378] Compiling C object 
drivers/librte_compress_mlx5.so.24.1.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:53.668 [272/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:53.668 [273/378] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:53.668 [274/378] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.668 [275/378] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:53.668 [276/378] Linking static target drivers/librte_compress_mlx5.a 00:01:53.668 [277/378] Linking static target lib/librte_hash.a 00:01:53.668 [278/378] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.668 [279/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:01:53.668 [280/378] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:01:53.668 [281/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:01:53.668 [282/378] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:01:53.668 [283/378] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:53.668 [284/378] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.668 [285/378] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:53.668 [286/378] Linking static target drivers/librte_crypto_mlx5.a 00:01:53.668 [287/378] Compiling C object drivers/librte_compress_isal.so.24.1.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:53.668 [288/378] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:53.927 [289/378] Linking static target drivers/librte_compress_isal.a 00:01:53.927 [290/378] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:53.927 [291/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:53.927 [292/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:01:53.927 [293/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:01:53.927 [294/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:01:53.927 [295/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:54.185 [296/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:01:54.185 [297/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:01:54.185 [298/378] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:01:54.185 [299/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:01:54.185 [300/378] Linking static target drivers/libtmp_rte_common_mlx5.a 00:01:54.185 [301/378] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.185 [302/378] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.185 [303/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:01:54.444 [304/378] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:01:54.444 [305/378] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:54.444 [306/378] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:54.444 [307/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:01:54.444 [308/378] Linking static target drivers/librte_crypto_ipsec_mb.a 00:01:54.444 [309/378] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:01:54.444 [310/378] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:01:54.444 [311/378] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:54.444 [312/378] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:54.444 [313/378] Linking static target drivers/librte_common_mlx5.a 00:01:54.702 [314/378] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.702 [315/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:54.702 [316/378] Linking static target lib/librte_ethdev.a 00:01:55.636 [317/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:55.894 [318/378] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.298 [319/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:01:57.298 [320/378] Linking static target drivers/libtmp_rte_common_qat.a 00:01:57.556 [321/378] Generating drivers/rte_common_qat.pmd.c with a custom command 00:01:57.556 [322/378] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:57.556 [323/378] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:57.556 [324/378] Linking static target drivers/librte_common_qat.a 00:01:59.454 [325/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:59.454 [326/378] Linking static target lib/librte_vhost.a 00:02:00.387 [327/378] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.760 [328/378] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.290 [329/378] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.666 [330/378] Generating lib/eal.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:05.925 [331/378] Linking target lib/librte_eal.so.24.1 00:02:05.925 [332/378] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:05.925 [333/378] Linking target lib/librte_dmadev.so.24.1 00:02:05.925 [334/378] Linking target drivers/librte_bus_vdev.so.24.1 00:02:05.925 [335/378] Linking target drivers/librte_bus_auxiliary.so.24.1 00:02:05.925 [336/378] Linking target lib/librte_ring.so.24.1 00:02:05.925 [337/378] Linking target lib/librte_meter.so.24.1 00:02:05.925 [338/378] Linking target lib/librte_pci.so.24.1 00:02:05.925 [339/378] Linking target lib/librte_timer.so.24.1 00:02:06.184 [340/378] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:06.184 [341/378] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:06.184 [342/378] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols 00:02:06.184 [343/378] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:06.184 [344/378] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:06.184 [345/378] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols 00:02:06.184 [346/378] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:06.184 [347/378] Linking target drivers/librte_bus_pci.so.24.1 00:02:06.184 [348/378] Linking target lib/librte_rcu.so.24.1 00:02:06.184 [349/378] Linking target lib/librte_mempool.so.24.1 00:02:06.442 [350/378] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols 00:02:06.442 [351/378] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:06.442 [352/378] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:06.442 [353/378] Linking target drivers/librte_mempool_ring.so.24.1 00:02:06.442 [354/378] Linking target 
lib/librte_mbuf.so.24.1 00:02:06.701 [355/378] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:06.701 [356/378] Linking target lib/librte_net.so.24.1 00:02:06.701 [357/378] Linking target lib/librte_reorder.so.24.1 00:02:06.701 [358/378] Linking target lib/librte_compressdev.so.24.1 00:02:06.701 [359/378] Linking target lib/librte_cryptodev.so.24.1 00:02:06.959 [360/378] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:06.959 [361/378] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols 00:02:06.959 [362/378] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:06.959 [363/378] Linking target lib/librte_hash.so.24.1 00:02:06.959 [364/378] Linking target lib/librte_cmdline.so.24.1 00:02:06.959 [365/378] Linking target lib/librte_security.so.24.1 00:02:06.959 [366/378] Linking target drivers/librte_compress_isal.so.24.1 00:02:06.959 [367/378] Linking target lib/librte_ethdev.so.24.1 00:02:06.959 [368/378] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:06.959 [369/378] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols 00:02:07.217 [370/378] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:07.217 [371/378] Linking target drivers/librte_common_mlx5.so.24.1 00:02:07.217 [372/378] Linking target lib/librte_power.so.24.1 00:02:07.217 [373/378] Linking target lib/librte_vhost.so.24.1 00:02:07.217 [374/378] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols 00:02:07.475 [375/378] Linking target drivers/librte_crypto_mlx5.so.24.1 00:02:07.475 [376/378] Linking target drivers/librte_compress_mlx5.so.24.1 00:02:07.475 [377/378] Linking target drivers/librte_crypto_ipsec_mb.so.24.1 00:02:07.475 [378/378] Linking target drivers/librte_common_qat.so.24.1 00:02:07.475 INFO: autodetecting 
backend as ninja 00:02:07.475 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:08.850 CC lib/ut/ut.o 00:02:08.850 CC lib/ut_mock/mock.o 00:02:08.850 CC lib/log/log.o 00:02:08.850 CC lib/log/log_flags.o 00:02:08.850 CC lib/log/log_deprecated.o 00:02:08.850 LIB libspdk_ut.a 00:02:08.850 LIB libspdk_ut_mock.a 00:02:08.850 LIB libspdk_log.a 00:02:08.850 SO libspdk_ut.so.2.0 00:02:08.850 SO libspdk_ut_mock.so.6.0 00:02:09.108 SO libspdk_log.so.7.0 00:02:09.108 SYMLINK libspdk_ut.so 00:02:09.108 SYMLINK libspdk_ut_mock.so 00:02:09.108 SYMLINK libspdk_log.so 00:02:09.366 CXX lib/trace_parser/trace.o 00:02:09.366 CC lib/ioat/ioat.o 00:02:09.366 CC lib/dma/dma.o 00:02:09.366 CC lib/util/base64.o 00:02:09.366 CC lib/util/crc16.o 00:02:09.366 CC lib/util/bit_array.o 00:02:09.366 CC lib/util/cpuset.o 00:02:09.366 CC lib/util/crc32_ieee.o 00:02:09.366 CC lib/util/crc32.o 00:02:09.366 CC lib/util/crc32c.o 00:02:09.366 CC lib/util/crc64.o 00:02:09.366 CC lib/util/dif.o 00:02:09.366 CC lib/util/fd.o 00:02:09.366 CC lib/util/fd_group.o 00:02:09.366 CC lib/util/file.o 00:02:09.366 CC lib/util/hexlify.o 00:02:09.366 CC lib/util/iov.o 00:02:09.366 CC lib/util/math.o 00:02:09.366 CC lib/util/net.o 00:02:09.366 CC lib/util/pipe.o 00:02:09.366 CC lib/util/strerror_tls.o 00:02:09.366 CC lib/util/string.o 00:02:09.366 CC lib/util/uuid.o 00:02:09.366 CC lib/util/xor.o 00:02:09.366 CC lib/util/zipf.o 00:02:09.624 CC lib/vfio_user/host/vfio_user_pci.o 00:02:09.624 CC lib/vfio_user/host/vfio_user.o 00:02:09.624 LIB libspdk_dma.a 00:02:09.624 SO libspdk_dma.so.4.0 00:02:09.624 LIB libspdk_ioat.a 00:02:09.624 SYMLINK libspdk_dma.so 00:02:09.881 SO libspdk_ioat.so.7.0 00:02:09.881 SYMLINK libspdk_ioat.so 00:02:09.881 LIB libspdk_vfio_user.a 00:02:09.881 SO libspdk_vfio_user.so.5.0 00:02:10.139 SYMLINK libspdk_vfio_user.so 00:02:10.139 LIB libspdk_util.a 00:02:10.139 LIB libspdk_trace_parser.a 00:02:10.139 
SO libspdk_trace_parser.so.5.0 00:02:10.139 SO libspdk_util.so.10.0 00:02:10.396 SYMLINK libspdk_trace_parser.so 00:02:10.396 SYMLINK libspdk_util.so 00:02:10.654 CC lib/vmd/led.o 00:02:10.654 CC lib/vmd/vmd.o 00:02:10.912 CC lib/rdma_utils/rdma_utils.o 00:02:10.912 CC lib/reduce/reduce.o 00:02:10.912 CC lib/rdma_provider/common.o 00:02:10.912 CC lib/conf/conf.o 00:02:10.912 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:10.912 CC lib/idxd/idxd_user.o 00:02:10.912 CC lib/idxd/idxd.o 00:02:10.912 CC lib/idxd/idxd_kernel.o 00:02:10.912 CC lib/json/json_parse.o 00:02:10.912 CC lib/env_dpdk/env.o 00:02:10.912 CC lib/json/json_util.o 00:02:10.912 CC lib/env_dpdk/memory.o 00:02:10.912 CC lib/json/json_write.o 00:02:10.912 CC lib/env_dpdk/pci.o 00:02:10.912 CC lib/env_dpdk/init.o 00:02:10.912 CC lib/env_dpdk/threads.o 00:02:10.912 CC lib/env_dpdk/pci_ioat.o 00:02:10.912 CC lib/env_dpdk/pci_virtio.o 00:02:10.912 CC lib/env_dpdk/pci_vmd.o 00:02:10.912 CC lib/env_dpdk/pci_idxd.o 00:02:10.912 CC lib/env_dpdk/pci_event.o 00:02:10.912 CC lib/env_dpdk/sigbus_handler.o 00:02:10.912 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:10.912 CC lib/env_dpdk/pci_dpdk.o 00:02:10.912 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:11.169 LIB libspdk_rdma_provider.a 00:02:11.169 LIB libspdk_conf.a 00:02:11.169 SO libspdk_rdma_provider.so.6.0 00:02:11.169 SO libspdk_conf.so.6.0 00:02:11.169 LIB libspdk_rdma_utils.a 00:02:11.169 SO libspdk_rdma_utils.so.1.0 00:02:11.169 SYMLINK libspdk_rdma_provider.so 00:02:11.169 LIB libspdk_json.a 00:02:11.169 SYMLINK libspdk_conf.so 00:02:11.169 SYMLINK libspdk_rdma_utils.so 00:02:11.169 SO libspdk_json.so.6.0 00:02:11.427 SYMLINK libspdk_json.so 00:02:11.427 LIB libspdk_idxd.a 00:02:11.686 SO libspdk_idxd.so.12.0 00:02:11.686 LIB libspdk_reduce.a 00:02:11.686 SYMLINK libspdk_idxd.so 00:02:11.686 CC lib/jsonrpc/jsonrpc_server.o 00:02:11.686 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:11.686 CC lib/jsonrpc/jsonrpc_client.o 00:02:11.686 SO libspdk_reduce.so.6.1 00:02:11.686 
CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:11.686 SYMLINK libspdk_reduce.so 00:02:11.943 LIB libspdk_jsonrpc.a 00:02:11.943 SO libspdk_jsonrpc.so.6.0 00:02:12.202 SYMLINK libspdk_jsonrpc.so 00:02:12.464 CC lib/rpc/rpc.o 00:02:12.464 LIB libspdk_env_dpdk.a 00:02:12.464 LIB libspdk_vmd.a 00:02:12.464 SO libspdk_vmd.so.6.0 00:02:12.760 SO libspdk_env_dpdk.so.15.0 00:02:12.760 SYMLINK libspdk_vmd.so 00:02:12.760 LIB libspdk_rpc.a 00:02:12.760 SYMLINK libspdk_env_dpdk.so 00:02:12.760 SO libspdk_rpc.so.6.0 00:02:12.760 SYMLINK libspdk_rpc.so 00:02:13.328 CC lib/trace/trace.o 00:02:13.328 CC lib/trace/trace_flags.o 00:02:13.328 CC lib/trace/trace_rpc.o 00:02:13.328 CC lib/keyring/keyring.o 00:02:13.328 CC lib/keyring/keyring_rpc.o 00:02:13.328 CC lib/notify/notify.o 00:02:13.328 CC lib/notify/notify_rpc.o 00:02:13.328 LIB libspdk_notify.a 00:02:13.588 LIB libspdk_keyring.a 00:02:13.588 SO libspdk_notify.so.6.0 00:02:13.588 SO libspdk_keyring.so.1.0 00:02:13.588 LIB libspdk_trace.a 00:02:13.588 SYMLINK libspdk_notify.so 00:02:13.588 SO libspdk_trace.so.10.0 00:02:13.588 SYMLINK libspdk_keyring.so 00:02:13.588 SYMLINK libspdk_trace.so 00:02:14.156 CC lib/sock/sock.o 00:02:14.156 CC lib/sock/sock_rpc.o 00:02:14.156 CC lib/thread/thread.o 00:02:14.156 CC lib/thread/iobuf.o 00:02:14.416 LIB libspdk_sock.a 00:02:14.416 SO libspdk_sock.so.10.0 00:02:14.675 SYMLINK libspdk_sock.so 00:02:14.933 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:14.933 CC lib/nvme/nvme_ctrlr.o 00:02:14.933 CC lib/nvme/nvme_fabric.o 00:02:14.933 CC lib/nvme/nvme_ns_cmd.o 00:02:14.933 CC lib/nvme/nvme_ns.o 00:02:14.933 CC lib/nvme/nvme_pcie_common.o 00:02:14.933 CC lib/nvme/nvme_pcie.o 00:02:14.933 CC lib/nvme/nvme_qpair.o 00:02:14.933 CC lib/nvme/nvme.o 00:02:14.933 CC lib/nvme/nvme_quirks.o 00:02:14.933 CC lib/nvme/nvme_transport.o 00:02:14.933 CC lib/nvme/nvme_discovery.o 00:02:14.933 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:14.933 CC lib/nvme/nvme_opal.o 00:02:14.933 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:14.933 
CC lib/nvme/nvme_tcp.o 00:02:14.933 CC lib/nvme/nvme_io_msg.o 00:02:14.933 CC lib/nvme/nvme_poll_group.o 00:02:14.933 CC lib/nvme/nvme_zns.o 00:02:14.933 CC lib/nvme/nvme_stubs.o 00:02:14.933 CC lib/nvme/nvme_auth.o 00:02:14.933 CC lib/nvme/nvme_cuse.o 00:02:14.934 CC lib/nvme/nvme_rdma.o 00:02:15.869 LIB libspdk_thread.a 00:02:15.869 SO libspdk_thread.so.10.1 00:02:15.869 SYMLINK libspdk_thread.so 00:02:16.437 CC lib/init/json_config.o 00:02:16.437 CC lib/init/subsystem.o 00:02:16.437 CC lib/init/subsystem_rpc.o 00:02:16.437 CC lib/init/rpc.o 00:02:16.437 CC lib/blob/request.o 00:02:16.437 CC lib/blob/blobstore.o 00:02:16.437 CC lib/blob/blob_bs_dev.o 00:02:16.437 CC lib/blob/zeroes.o 00:02:16.437 CC lib/accel/accel_sw.o 00:02:16.437 CC lib/accel/accel.o 00:02:16.437 CC lib/virtio/virtio.o 00:02:16.437 CC lib/accel/accel_rpc.o 00:02:16.437 CC lib/virtio/virtio_vfio_user.o 00:02:16.437 CC lib/virtio/virtio_vhost_user.o 00:02:16.437 CC lib/virtio/virtio_pci.o 00:02:16.695 LIB libspdk_init.a 00:02:16.695 SO libspdk_init.so.5.0 00:02:16.695 LIB libspdk_virtio.a 00:02:16.695 SYMLINK libspdk_init.so 00:02:16.695 SO libspdk_virtio.so.7.0 00:02:16.953 SYMLINK libspdk_virtio.so 00:02:17.212 CC lib/event/app.o 00:02:17.212 CC lib/event/reactor.o 00:02:17.212 CC lib/event/log_rpc.o 00:02:17.212 CC lib/event/app_rpc.o 00:02:17.212 CC lib/event/scheduler_static.o 00:02:17.471 LIB libspdk_nvme.a 00:02:17.471 LIB libspdk_accel.a 00:02:17.730 SO libspdk_accel.so.16.0 00:02:17.730 SO libspdk_nvme.so.13.1 00:02:17.730 SYMLINK libspdk_accel.so 00:02:17.730 LIB libspdk_event.a 00:02:17.730 SO libspdk_event.so.14.0 00:02:17.988 SYMLINK libspdk_event.so 00:02:17.988 SYMLINK libspdk_nvme.so 00:02:17.988 CC lib/bdev/bdev.o 00:02:17.988 CC lib/bdev/part.o 00:02:17.988 CC lib/bdev/bdev_rpc.o 00:02:17.988 CC lib/bdev/bdev_zone.o 00:02:17.988 CC lib/bdev/scsi_nvme.o 00:02:20.523 LIB libspdk_blob.a 00:02:20.524 SO libspdk_blob.so.11.0 00:02:20.524 SYMLINK libspdk_blob.so 00:02:20.524 LIB 
libspdk_bdev.a 00:02:20.524 SO libspdk_bdev.so.16.0 00:02:20.524 SYMLINK libspdk_bdev.so 00:02:20.524 CC lib/lvol/lvol.o 00:02:20.524 CC lib/blobfs/blobfs.o 00:02:20.524 CC lib/blobfs/tree.o 00:02:20.783 CC lib/scsi/lun.o 00:02:20.783 CC lib/scsi/port.o 00:02:20.783 CC lib/scsi/dev.o 00:02:20.783 CC lib/scsi/scsi_pr.o 00:02:20.783 CC lib/scsi/scsi_bdev.o 00:02:20.783 CC lib/scsi/scsi.o 00:02:20.783 CC lib/scsi/scsi_rpc.o 00:02:20.783 CC lib/scsi/task.o 00:02:20.783 CC lib/ublk/ublk.o 00:02:20.783 CC lib/ublk/ublk_rpc.o 00:02:20.783 CC lib/ftl/ftl_core.o 00:02:20.783 CC lib/ftl/ftl_init.o 00:02:20.783 CC lib/ftl/ftl_layout.o 00:02:20.783 CC lib/ftl/ftl_debug.o 00:02:21.041 CC lib/ftl/ftl_io.o 00:02:21.041 CC lib/ftl/ftl_sb.o 00:02:21.041 CC lib/ftl/ftl_l2p.o 00:02:21.041 CC lib/ftl/ftl_l2p_flat.o 00:02:21.041 CC lib/ftl/ftl_nv_cache.o 00:02:21.041 CC lib/nvmf/ctrlr.o 00:02:21.041 CC lib/ftl/ftl_band.o 00:02:21.041 CC lib/ftl/ftl_band_ops.o 00:02:21.041 CC lib/nvmf/ctrlr_discovery.o 00:02:21.041 CC lib/ftl/ftl_writer.o 00:02:21.041 CC lib/ftl/ftl_rq.o 00:02:21.042 CC lib/nvmf/ctrlr_bdev.o 00:02:21.042 CC lib/ftl/ftl_reloc.o 00:02:21.042 CC lib/nvmf/nvmf.o 00:02:21.042 CC lib/nvmf/subsystem.o 00:02:21.042 CC lib/ftl/ftl_l2p_cache.o 00:02:21.042 CC lib/ftl/ftl_p2l.o 00:02:21.042 CC lib/nvmf/nvmf_rpc.o 00:02:21.042 CC lib/ftl/mngt/ftl_mngt.o 00:02:21.042 CC lib/nvmf/transport.o 00:02:21.042 CC lib/nvmf/tcp.o 00:02:21.042 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:21.042 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:21.042 CC lib/nvmf/stubs.o 00:02:21.042 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:21.042 CC lib/nvmf/mdns_server.o 00:02:21.042 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:21.042 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:21.042 CC lib/nbd/nbd.o 00:02:21.042 CC lib/nvmf/auth.o 00:02:21.042 CC lib/nvmf/rdma.o 00:02:21.042 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:21.042 CC lib/nbd/nbd_rpc.o 00:02:21.042 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:21.042 CC lib/ftl/mngt/ftl_mngt_band.o 
00:02:21.042 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:21.042 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:21.042 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:21.042 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:21.042 CC lib/ftl/utils/ftl_conf.o 00:02:21.042 CC lib/ftl/utils/ftl_md.o 00:02:21.042 CC lib/ftl/utils/ftl_mempool.o 00:02:21.042 CC lib/ftl/utils/ftl_bitmap.o 00:02:21.042 CC lib/ftl/utils/ftl_property.o 00:02:21.042 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:21.042 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:21.042 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:21.042 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:21.042 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:21.042 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:21.042 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:21.042 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:21.042 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:21.042 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:21.042 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:21.042 CC lib/ftl/base/ftl_base_dev.o 00:02:21.042 CC lib/ftl/base/ftl_base_bdev.o 00:02:21.042 CC lib/ftl/ftl_trace.o 00:02:21.608 LIB libspdk_nbd.a 00:02:21.867 SO libspdk_nbd.so.7.0 00:02:21.867 LIB libspdk_scsi.a 00:02:21.867 LIB libspdk_lvol.a 00:02:21.867 LIB libspdk_ublk.a 00:02:21.867 LIB libspdk_blobfs.a 00:02:21.867 SYMLINK libspdk_nbd.so 00:02:21.867 SO libspdk_scsi.so.9.0 00:02:21.867 SO libspdk_lvol.so.10.0 00:02:21.867 SO libspdk_ublk.so.3.0 00:02:21.867 SO libspdk_blobfs.so.10.0 00:02:21.867 SYMLINK libspdk_ublk.so 00:02:21.867 SYMLINK libspdk_lvol.so 00:02:21.867 SYMLINK libspdk_scsi.so 00:02:21.867 SYMLINK libspdk_blobfs.so 00:02:22.434 LIB libspdk_ftl.a 00:02:22.434 CC lib/iscsi/conn.o 00:02:22.434 CC lib/iscsi/init_grp.o 00:02:22.434 CC lib/iscsi/iscsi.o 00:02:22.434 CC lib/iscsi/md5.o 00:02:22.434 CC lib/iscsi/param.o 00:02:22.434 CC lib/vhost/vhost.o 00:02:22.434 CC lib/iscsi/portal_grp.o 00:02:22.434 CC lib/vhost/vhost_rpc.o 00:02:22.434 CC lib/iscsi/tgt_node.o 00:02:22.434 CC lib/vhost/vhost_scsi.o 00:02:22.434 
CC lib/iscsi/iscsi_subsystem.o 00:02:22.434 CC lib/vhost/vhost_blk.o 00:02:22.434 CC lib/iscsi/iscsi_rpc.o 00:02:22.434 CC lib/vhost/rte_vhost_user.o 00:02:22.434 CC lib/iscsi/task.o 00:02:22.434 SO libspdk_ftl.so.9.0 00:02:23.001 SYMLINK libspdk_ftl.so 00:02:23.567 LIB libspdk_iscsi.a 00:02:23.568 LIB libspdk_vhost.a 00:02:23.568 SO libspdk_iscsi.so.8.0 00:02:23.568 SO libspdk_vhost.so.8.0 00:02:23.826 SYMLINK libspdk_vhost.so 00:02:23.826 LIB libspdk_nvmf.a 00:02:23.826 SYMLINK libspdk_iscsi.so 00:02:23.826 SO libspdk_nvmf.so.19.0 00:02:24.394 SYMLINK libspdk_nvmf.so 00:02:24.960 CC module/env_dpdk/env_dpdk_rpc.o 00:02:24.960 CC module/keyring/file/keyring.o 00:02:24.960 CC module/keyring/file/keyring_rpc.o 00:02:24.960 CC module/accel/ioat/accel_ioat.o 00:02:24.960 CC module/accel/ioat/accel_ioat_rpc.o 00:02:24.960 CC module/keyring/linux/keyring.o 00:02:24.960 CC module/accel/iaa/accel_iaa.o 00:02:24.960 CC module/accel/iaa/accel_iaa_rpc.o 00:02:24.960 LIB libspdk_env_dpdk_rpc.a 00:02:24.960 CC module/sock/posix/posix.o 00:02:24.960 CC module/keyring/linux/keyring_rpc.o 00:02:24.960 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:24.960 CC module/scheduler/gscheduler/gscheduler.o 00:02:24.960 CC module/accel/dsa/accel_dsa.o 00:02:24.960 CC module/accel/error/accel_error.o 00:02:24.960 CC module/accel/error/accel_error_rpc.o 00:02:24.960 CC module/blob/bdev/blob_bdev.o 00:02:24.960 CC module/accel/dsa/accel_dsa_rpc.o 00:02:24.960 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:02:24.960 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:02:24.960 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:24.960 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:02:24.960 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:02:24.960 SO libspdk_env_dpdk_rpc.so.6.0 00:02:24.960 SYMLINK libspdk_env_dpdk_rpc.so 00:02:25.218 LIB libspdk_keyring_file.a 00:02:25.218 LIB libspdk_accel_error.a 00:02:25.218 LIB 
libspdk_keyring_linux.a 00:02:25.218 LIB libspdk_scheduler_gscheduler.a 00:02:25.218 LIB libspdk_scheduler_dpdk_governor.a 00:02:25.218 SO libspdk_scheduler_gscheduler.so.4.0 00:02:25.218 SO libspdk_keyring_linux.so.1.0 00:02:25.218 SO libspdk_accel_error.so.2.0 00:02:25.218 SO libspdk_keyring_file.so.1.0 00:02:25.218 LIB libspdk_accel_ioat.a 00:02:25.218 LIB libspdk_accel_iaa.a 00:02:25.218 LIB libspdk_scheduler_dynamic.a 00:02:25.218 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:25.218 SO libspdk_accel_ioat.so.6.0 00:02:25.218 SO libspdk_scheduler_dynamic.so.4.0 00:02:25.218 SO libspdk_accel_iaa.so.3.0 00:02:25.218 SYMLINK libspdk_scheduler_gscheduler.so 00:02:25.218 SYMLINK libspdk_keyring_file.so 00:02:25.218 SYMLINK libspdk_keyring_linux.so 00:02:25.218 SYMLINK libspdk_accel_error.so 00:02:25.218 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:25.218 LIB libspdk_blob_bdev.a 00:02:25.218 SYMLINK libspdk_accel_ioat.so 00:02:25.218 SYMLINK libspdk_scheduler_dynamic.so 00:02:25.218 SYMLINK libspdk_accel_iaa.so 00:02:25.476 SO libspdk_blob_bdev.so.11.0 00:02:25.476 SYMLINK libspdk_blob_bdev.so 00:02:25.734 LIB libspdk_accel_dsa.a 00:02:25.734 SO libspdk_accel_dsa.so.5.0 00:02:25.734 SYMLINK libspdk_accel_dsa.so 00:02:25.993 LIB libspdk_sock_posix.a 00:02:25.993 SO libspdk_sock_posix.so.6.0 00:02:25.993 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:25.993 CC module/bdev/split/vbdev_split.o 00:02:25.993 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:25.993 CC module/bdev/split/vbdev_split_rpc.o 00:02:25.993 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:25.993 CC module/bdev/delay/vbdev_delay.o 00:02:25.993 CC module/bdev/gpt/gpt.o 00:02:25.993 CC module/bdev/passthru/vbdev_passthru.o 00:02:25.993 CC module/bdev/gpt/vbdev_gpt.o 00:02:25.993 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:25.993 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:25.993 CC module/bdev/lvol/vbdev_lvol.o 00:02:25.993 CC module/bdev/ftl/bdev_ftl.o 00:02:25.993 CC 
module/bdev/malloc/bdev_malloc.o 00:02:25.993 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:25.993 CC module/blobfs/bdev/blobfs_bdev.o 00:02:25.993 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:25.993 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:25.993 CC module/bdev/aio/bdev_aio.o 00:02:25.993 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:25.993 CC module/bdev/aio/bdev_aio_rpc.o 00:02:25.993 CC module/bdev/iscsi/bdev_iscsi.o 00:02:25.993 CC module/bdev/null/bdev_null.o 00:02:25.993 CC module/bdev/crypto/vbdev_crypto.o 00:02:25.993 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:25.993 CC module/bdev/null/bdev_null_rpc.o 00:02:25.993 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:02:25.993 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:25.993 CC module/bdev/raid/bdev_raid.o 00:02:25.993 CC module/bdev/raid/bdev_raid_rpc.o 00:02:25.993 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:25.993 CC module/bdev/raid/bdev_raid_sb.o 00:02:25.993 CC module/bdev/nvme/bdev_nvme.o 00:02:25.993 CC module/bdev/raid/raid1.o 00:02:25.993 CC module/bdev/raid/raid0.o 00:02:25.993 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:25.993 CC module/bdev/raid/concat.o 00:02:25.993 CC module/bdev/nvme/nvme_rpc.o 00:02:25.993 CC module/bdev/nvme/bdev_mdns_client.o 00:02:25.993 CC module/bdev/nvme/vbdev_opal.o 00:02:25.993 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:25.993 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:25.993 CC module/bdev/compress/vbdev_compress.o 00:02:25.993 CC module/bdev/compress/vbdev_compress_rpc.o 00:02:25.993 CC module/bdev/error/vbdev_error.o 00:02:25.993 CC module/bdev/error/vbdev_error_rpc.o 00:02:25.993 SYMLINK libspdk_sock_posix.so 00:02:26.251 LIB libspdk_accel_dpdk_compressdev.a 00:02:26.251 LIB libspdk_blobfs_bdev.a 00:02:26.251 SO libspdk_accel_dpdk_compressdev.so.3.0 00:02:26.251 LIB libspdk_bdev_split.a 00:02:26.251 SO libspdk_blobfs_bdev.so.6.0 00:02:26.251 SO libspdk_bdev_split.so.6.0 00:02:26.251 LIB libspdk_bdev_passthru.a 00:02:26.251 SYMLINK 
libspdk_accel_dpdk_compressdev.so 00:02:26.251 LIB libspdk_bdev_null.a 00:02:26.509 SYMLINK libspdk_blobfs_bdev.so 00:02:26.509 SO libspdk_bdev_null.so.6.0 00:02:26.509 SYMLINK libspdk_bdev_split.so 00:02:26.509 LIB libspdk_bdev_gpt.a 00:02:26.509 SO libspdk_bdev_passthru.so.6.0 00:02:26.509 LIB libspdk_bdev_error.a 00:02:26.509 LIB libspdk_bdev_ftl.a 00:02:26.509 SO libspdk_bdev_gpt.so.6.0 00:02:26.509 SO libspdk_bdev_error.so.6.0 00:02:26.509 LIB libspdk_bdev_zone_block.a 00:02:26.509 SO libspdk_bdev_ftl.so.6.0 00:02:26.509 LIB libspdk_bdev_aio.a 00:02:26.509 SYMLINK libspdk_bdev_passthru.so 00:02:26.509 SYMLINK libspdk_bdev_null.so 00:02:26.509 LIB libspdk_bdev_delay.a 00:02:26.509 SO libspdk_bdev_zone_block.so.6.0 00:02:26.509 LIB libspdk_bdev_iscsi.a 00:02:26.509 SO libspdk_bdev_aio.so.6.0 00:02:26.509 LIB libspdk_bdev_malloc.a 00:02:26.509 LIB libspdk_bdev_compress.a 00:02:26.509 SYMLINK libspdk_bdev_gpt.so 00:02:26.509 SYMLINK libspdk_bdev_error.so 00:02:26.509 SO libspdk_bdev_iscsi.so.6.0 00:02:26.509 SO libspdk_bdev_delay.so.6.0 00:02:26.509 SYMLINK libspdk_bdev_ftl.so 00:02:26.509 SO libspdk_bdev_malloc.so.6.0 00:02:26.509 SO libspdk_bdev_compress.so.6.0 00:02:26.509 SYMLINK libspdk_bdev_zone_block.so 00:02:26.509 SYMLINK libspdk_bdev_aio.so 00:02:26.509 SYMLINK libspdk_bdev_iscsi.so 00:02:26.509 SYMLINK libspdk_bdev_delay.so 00:02:26.509 SYMLINK libspdk_bdev_malloc.so 00:02:26.789 SYMLINK libspdk_bdev_compress.so 00:02:26.789 LIB libspdk_accel_dpdk_cryptodev.a 00:02:26.789 LIB libspdk_bdev_lvol.a 00:02:26.789 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:02:26.789 LIB libspdk_bdev_virtio.a 00:02:26.789 SO libspdk_bdev_lvol.so.6.0 00:02:26.789 SO libspdk_bdev_virtio.so.6.0 00:02:26.789 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:02:26.789 SYMLINK libspdk_bdev_lvol.so 00:02:26.789 SYMLINK libspdk_bdev_virtio.so 00:02:26.789 LIB libspdk_bdev_crypto.a 00:02:27.048 SO libspdk_bdev_crypto.so.6.0 00:02:27.048 SYMLINK libspdk_bdev_crypto.so 00:02:27.307 LIB 
libspdk_bdev_raid.a 00:02:27.307 SO libspdk_bdev_raid.so.6.0 00:02:27.307 SYMLINK libspdk_bdev_raid.so 00:02:28.683 LIB libspdk_bdev_nvme.a 00:02:28.684 SO libspdk_bdev_nvme.so.7.0 00:02:28.942 SYMLINK libspdk_bdev_nvme.so 00:02:29.510 CC module/event/subsystems/vmd/vmd.o 00:02:29.510 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:29.510 CC module/event/subsystems/iobuf/iobuf.o 00:02:29.510 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:29.510 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:29.510 CC module/event/subsystems/sock/sock.o 00:02:29.510 CC module/event/subsystems/keyring/keyring.o 00:02:29.510 CC module/event/subsystems/scheduler/scheduler.o 00:02:29.768 LIB libspdk_event_vmd.a 00:02:29.768 LIB libspdk_event_vhost_blk.a 00:02:29.768 LIB libspdk_event_sock.a 00:02:29.768 LIB libspdk_event_keyring.a 00:02:29.768 SO libspdk_event_vhost_blk.so.3.0 00:02:29.768 LIB libspdk_event_iobuf.a 00:02:29.768 SO libspdk_event_vmd.so.6.0 00:02:29.768 LIB libspdk_event_scheduler.a 00:02:29.768 SO libspdk_event_sock.so.5.0 00:02:29.768 SO libspdk_event_iobuf.so.3.0 00:02:29.768 SO libspdk_event_keyring.so.1.0 00:02:29.768 SO libspdk_event_scheduler.so.4.0 00:02:29.768 SYMLINK libspdk_event_vhost_blk.so 00:02:29.768 SYMLINK libspdk_event_vmd.so 00:02:29.768 SYMLINK libspdk_event_sock.so 00:02:29.768 SYMLINK libspdk_event_keyring.so 00:02:29.768 SYMLINK libspdk_event_iobuf.so 00:02:29.768 SYMLINK libspdk_event_scheduler.so 00:02:30.337 CC module/event/subsystems/accel/accel.o 00:02:30.337 LIB libspdk_event_accel.a 00:02:30.337 SO libspdk_event_accel.so.6.0 00:02:30.337 SYMLINK libspdk_event_accel.so 00:02:30.905 CC module/event/subsystems/bdev/bdev.o 00:02:30.905 LIB libspdk_event_bdev.a 00:02:31.164 SO libspdk_event_bdev.so.6.0 00:02:31.164 SYMLINK libspdk_event_bdev.so 00:02:31.423 CC module/event/subsystems/scsi/scsi.o 00:02:31.423 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:31.423 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:31.423 CC 
module/event/subsystems/nbd/nbd.o 00:02:31.423 CC module/event/subsystems/ublk/ublk.o 00:02:31.682 LIB libspdk_event_nbd.a 00:02:31.682 LIB libspdk_event_scsi.a 00:02:31.682 LIB libspdk_event_ublk.a 00:02:31.682 SO libspdk_event_nbd.so.6.0 00:02:31.682 SO libspdk_event_scsi.so.6.0 00:02:31.682 SO libspdk_event_ublk.so.3.0 00:02:31.682 LIB libspdk_event_nvmf.a 00:02:31.682 SYMLINK libspdk_event_nbd.so 00:02:31.682 SO libspdk_event_nvmf.so.6.0 00:02:31.682 SYMLINK libspdk_event_scsi.so 00:02:31.682 SYMLINK libspdk_event_ublk.so 00:02:31.941 SYMLINK libspdk_event_nvmf.so 00:02:32.200 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:32.200 CC module/event/subsystems/iscsi/iscsi.o 00:02:32.201 LIB libspdk_event_vhost_scsi.a 00:02:32.459 LIB libspdk_event_iscsi.a 00:02:32.459 SO libspdk_event_vhost_scsi.so.3.0 00:02:32.459 SO libspdk_event_iscsi.so.6.0 00:02:32.459 SYMLINK libspdk_event_vhost_scsi.so 00:02:32.459 SYMLINK libspdk_event_iscsi.so 00:02:32.718 SO libspdk.so.6.0 00:02:32.718 SYMLINK libspdk.so 00:02:32.977 CC test/rpc_client/rpc_client_test.o 00:02:32.977 TEST_HEADER include/spdk/accel_module.h 00:02:32.977 TEST_HEADER include/spdk/barrier.h 00:02:32.977 TEST_HEADER include/spdk/accel.h 00:02:32.977 TEST_HEADER include/spdk/assert.h 00:02:32.977 TEST_HEADER include/spdk/base64.h 00:02:32.977 TEST_HEADER include/spdk/bdev_module.h 00:02:32.977 TEST_HEADER include/spdk/bdev.h 00:02:32.977 TEST_HEADER include/spdk/bdev_zone.h 00:02:32.977 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:32.977 TEST_HEADER include/spdk/bit_array.h 00:02:32.977 TEST_HEADER include/spdk/blob_bdev.h 00:02:32.977 TEST_HEADER include/spdk/bit_pool.h 00:02:32.977 TEST_HEADER include/spdk/blobfs.h 00:02:32.977 CC app/trace_record/trace_record.o 00:02:32.977 CXX app/trace/trace.o 00:02:32.977 CC app/spdk_nvme_identify/identify.o 00:02:32.977 TEST_HEADER include/spdk/conf.h 00:02:32.977 TEST_HEADER include/spdk/blob.h 00:02:32.977 CC app/spdk_lspci/spdk_lspci.o 00:02:32.977 
TEST_HEADER include/spdk/cpuset.h 00:02:32.977 TEST_HEADER include/spdk/config.h 00:02:32.977 TEST_HEADER include/spdk/crc32.h 00:02:32.977 TEST_HEADER include/spdk/crc16.h 00:02:32.977 TEST_HEADER include/spdk/crc64.h 00:02:32.977 CC app/spdk_nvme_discover/discovery_aer.o 00:02:32.977 TEST_HEADER include/spdk/endian.h 00:02:32.977 CC app/spdk_nvme_perf/perf.o 00:02:32.977 TEST_HEADER include/spdk/dma.h 00:02:32.977 TEST_HEADER include/spdk/dif.h 00:02:32.977 TEST_HEADER include/spdk/env.h 00:02:32.977 TEST_HEADER include/spdk/env_dpdk.h 00:02:32.977 CC app/spdk_top/spdk_top.o 00:02:32.977 TEST_HEADER include/spdk/event.h 00:02:32.977 TEST_HEADER include/spdk/fd_group.h 00:02:32.977 TEST_HEADER include/spdk/file.h 00:02:32.977 TEST_HEADER include/spdk/fd.h 00:02:32.977 TEST_HEADER include/spdk/ftl.h 00:02:32.977 TEST_HEADER include/spdk/gpt_spec.h 00:02:32.977 TEST_HEADER include/spdk/hexlify.h 00:02:32.977 TEST_HEADER include/spdk/idxd.h 00:02:32.977 TEST_HEADER include/spdk/histogram_data.h 00:02:32.977 TEST_HEADER include/spdk/idxd_spec.h 00:02:32.977 TEST_HEADER include/spdk/ioat.h 00:02:32.977 TEST_HEADER include/spdk/ioat_spec.h 00:02:32.977 TEST_HEADER include/spdk/init.h 00:02:32.977 TEST_HEADER include/spdk/iscsi_spec.h 00:02:32.977 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:32.977 TEST_HEADER include/spdk/json.h 00:02:32.977 TEST_HEADER include/spdk/jsonrpc.h 00:02:32.977 TEST_HEADER include/spdk/keyring.h 00:02:32.977 TEST_HEADER include/spdk/keyring_module.h 00:02:32.977 TEST_HEADER include/spdk/likely.h 00:02:32.977 TEST_HEADER include/spdk/log.h 00:02:32.977 TEST_HEADER include/spdk/lvol.h 00:02:32.977 TEST_HEADER include/spdk/mmio.h 00:02:32.977 TEST_HEADER include/spdk/memory.h 00:02:32.977 TEST_HEADER include/spdk/net.h 00:02:32.977 TEST_HEADER include/spdk/nbd.h 00:02:32.977 TEST_HEADER include/spdk/nvme.h 00:02:32.977 TEST_HEADER include/spdk/nvme_intel.h 00:02:32.977 TEST_HEADER include/spdk/notify.h 00:02:32.977 TEST_HEADER 
include/spdk/nvme_ocssd_spec.h 00:02:32.977 CC app/spdk_dd/spdk_dd.o 00:02:32.977 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:32.977 TEST_HEADER include/spdk/nvme_zns.h 00:02:32.977 TEST_HEADER include/spdk/nvme_spec.h 00:02:32.977 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:32.977 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:32.977 TEST_HEADER include/spdk/nvmf_transport.h 00:02:32.977 TEST_HEADER include/spdk/nvmf.h 00:02:32.977 TEST_HEADER include/spdk/opal.h 00:02:32.977 TEST_HEADER include/spdk/nvmf_spec.h 00:02:32.977 TEST_HEADER include/spdk/opal_spec.h 00:02:32.977 CC app/nvmf_tgt/nvmf_main.o 00:02:32.977 TEST_HEADER include/spdk/pipe.h 00:02:32.977 TEST_HEADER include/spdk/pci_ids.h 00:02:32.977 TEST_HEADER include/spdk/rpc.h 00:02:32.977 TEST_HEADER include/spdk/queue.h 00:02:32.977 TEST_HEADER include/spdk/reduce.h 00:02:32.977 CC app/iscsi_tgt/iscsi_tgt.o 00:02:32.977 TEST_HEADER include/spdk/scheduler.h 00:02:32.977 TEST_HEADER include/spdk/scsi.h 00:02:32.977 CC app/spdk_tgt/spdk_tgt.o 00:02:32.977 TEST_HEADER include/spdk/scsi_spec.h 00:02:32.977 TEST_HEADER include/spdk/sock.h 00:02:32.977 TEST_HEADER include/spdk/stdinc.h 00:02:32.977 TEST_HEADER include/spdk/string.h 00:02:32.977 TEST_HEADER include/spdk/thread.h 00:02:32.977 TEST_HEADER include/spdk/trace_parser.h 00:02:32.977 TEST_HEADER include/spdk/trace.h 00:02:33.243 TEST_HEADER include/spdk/tree.h 00:02:33.243 TEST_HEADER include/spdk/ublk.h 00:02:33.243 TEST_HEADER include/spdk/util.h 00:02:33.243 TEST_HEADER include/spdk/uuid.h 00:02:33.243 TEST_HEADER include/spdk/version.h 00:02:33.243 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:33.243 TEST_HEADER include/spdk/vhost.h 00:02:33.243 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:33.243 TEST_HEADER include/spdk/vmd.h 00:02:33.243 TEST_HEADER include/spdk/xor.h 00:02:33.243 TEST_HEADER include/spdk/zipf.h 00:02:33.243 CXX test/cpp_headers/accel_module.o 00:02:33.243 CXX test/cpp_headers/accel.o 00:02:33.243 CXX test/cpp_headers/assert.o 
00:02:33.243 CXX test/cpp_headers/barrier.o 00:02:33.243 CXX test/cpp_headers/base64.o 00:02:33.243 CXX test/cpp_headers/bdev.o 00:02:33.243 CXX test/cpp_headers/bdev_module.o 00:02:33.243 CXX test/cpp_headers/bdev_zone.o 00:02:33.243 CXX test/cpp_headers/bit_array.o 00:02:33.243 CXX test/cpp_headers/blob_bdev.o 00:02:33.243 CXX test/cpp_headers/blobfs_bdev.o 00:02:33.243 CXX test/cpp_headers/bit_pool.o 00:02:33.243 CXX test/cpp_headers/blobfs.o 00:02:33.243 CXX test/cpp_headers/blob.o 00:02:33.243 CXX test/cpp_headers/config.o 00:02:33.243 CXX test/cpp_headers/cpuset.o 00:02:33.243 CXX test/cpp_headers/conf.o 00:02:33.243 CXX test/cpp_headers/crc64.o 00:02:33.243 CXX test/cpp_headers/crc32.o 00:02:33.243 CXX test/cpp_headers/crc16.o 00:02:33.243 CXX test/cpp_headers/dif.o 00:02:33.243 CXX test/cpp_headers/dma.o 00:02:33.243 CXX test/cpp_headers/env_dpdk.o 00:02:33.243 CXX test/cpp_headers/endian.o 00:02:33.243 CXX test/cpp_headers/env.o 00:02:33.243 CXX test/cpp_headers/event.o 00:02:33.243 CXX test/cpp_headers/fd_group.o 00:02:33.243 CXX test/cpp_headers/file.o 00:02:33.243 CXX test/cpp_headers/fd.o 00:02:33.243 CXX test/cpp_headers/hexlify.o 00:02:33.243 CXX test/cpp_headers/gpt_spec.o 00:02:33.243 CXX test/cpp_headers/ftl.o 00:02:33.243 CXX test/cpp_headers/histogram_data.o 00:02:33.243 CXX test/cpp_headers/idxd.o 00:02:33.243 CXX test/cpp_headers/idxd_spec.o 00:02:33.243 CXX test/cpp_headers/init.o 00:02:33.243 CXX test/cpp_headers/ioat.o 00:02:33.243 CXX test/cpp_headers/iscsi_spec.o 00:02:33.243 CXX test/cpp_headers/ioat_spec.o 00:02:33.243 CXX test/cpp_headers/jsonrpc.o 00:02:33.243 CXX test/cpp_headers/keyring_module.o 00:02:33.243 CXX test/cpp_headers/json.o 00:02:33.243 CXX test/cpp_headers/keyring.o 00:02:33.243 CXX test/cpp_headers/likely.o 00:02:33.243 CXX test/cpp_headers/log.o 00:02:33.243 CXX test/cpp_headers/lvol.o 00:02:33.243 CXX test/cpp_headers/memory.o 00:02:33.243 CXX test/cpp_headers/nbd.o 00:02:33.243 CXX test/cpp_headers/mmio.o 
00:02:33.243 CXX test/cpp_headers/net.o 00:02:33.243 CXX test/cpp_headers/notify.o 00:02:33.243 CXX test/cpp_headers/nvme.o 00:02:33.243 CXX test/cpp_headers/nvme_intel.o 00:02:33.243 CXX test/cpp_headers/nvme_ocssd.o 00:02:33.243 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:33.243 CXX test/cpp_headers/nvme_spec.o 00:02:33.243 CXX test/cpp_headers/nvme_zns.o 00:02:33.243 CXX test/cpp_headers/nvmf_cmd.o 00:02:33.243 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:33.243 CXX test/cpp_headers/nvmf.o 00:02:33.243 CXX test/cpp_headers/nvmf_spec.o 00:02:33.243 CXX test/cpp_headers/nvmf_transport.o 00:02:33.243 CXX test/cpp_headers/opal.o 00:02:33.243 CXX test/cpp_headers/opal_spec.o 00:02:33.243 CXX test/cpp_headers/pci_ids.o 00:02:33.243 CXX test/cpp_headers/pipe.o 00:02:33.243 CXX test/cpp_headers/queue.o 00:02:33.243 CC test/app/stub/stub.o 00:02:33.243 CXX test/cpp_headers/reduce.o 00:02:33.243 CXX test/cpp_headers/rpc.o 00:02:33.243 CXX test/cpp_headers/scheduler.o 00:02:33.243 CXX test/cpp_headers/scsi.o 00:02:33.243 CXX test/cpp_headers/scsi_spec.o 00:02:33.243 CXX test/cpp_headers/sock.o 00:02:33.243 CC test/app/jsoncat/jsoncat.o 00:02:33.243 CXX test/cpp_headers/stdinc.o 00:02:33.243 CXX test/cpp_headers/string.o 00:02:33.243 CXX test/cpp_headers/thread.o 00:02:33.243 CXX test/cpp_headers/trace.o 00:02:33.243 CXX test/cpp_headers/trace_parser.o 00:02:33.243 CC test/env/memory/memory_ut.o 00:02:33.243 CXX test/cpp_headers/tree.o 00:02:33.243 CXX test/cpp_headers/ublk.o 00:02:33.243 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:33.243 CXX test/cpp_headers/util.o 00:02:33.243 CXX test/cpp_headers/uuid.o 00:02:33.243 CC examples/util/zipf/zipf.o 00:02:33.243 CC test/app/histogram_perf/histogram_perf.o 00:02:33.243 CC examples/ioat/verify/verify.o 00:02:33.243 CC test/env/vtophys/vtophys.o 00:02:33.243 CC test/dma/test_dma/test_dma.o 00:02:33.243 CC test/thread/poller_perf/poller_perf.o 00:02:33.243 CC test/env/pci/pci_ut.o 00:02:33.244 CXX 
test/cpp_headers/version.o 00:02:33.244 CC app/fio/nvme/fio_plugin.o 00:02:33.244 CC test/app/bdev_svc/bdev_svc.o 00:02:33.244 CC examples/ioat/perf/perf.o 00:02:33.526 CXX test/cpp_headers/vfio_user_pci.o 00:02:33.526 CC app/fio/bdev/fio_plugin.o 00:02:33.526 CXX test/cpp_headers/vfio_user_spec.o 00:02:33.526 LINK spdk_lspci 00:02:33.792 LINK rpc_client_test 00:02:33.792 LINK interrupt_tgt 00:02:33.792 LINK spdk_tgt 00:02:34.058 LINK nvmf_tgt 00:02:34.058 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:34.058 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:34.058 CC test/env/mem_callbacks/mem_callbacks.o 00:02:34.058 CXX test/cpp_headers/vhost.o 00:02:34.058 LINK histogram_perf 00:02:34.058 LINK jsoncat 00:02:34.058 LINK vtophys 00:02:34.058 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:34.058 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:34.058 CXX test/cpp_headers/vmd.o 00:02:34.058 LINK poller_perf 00:02:34.058 CXX test/cpp_headers/xor.o 00:02:34.058 CXX test/cpp_headers/zipf.o 00:02:34.058 LINK iscsi_tgt 00:02:34.058 LINK env_dpdk_post_init 00:02:34.058 LINK spdk_trace_record 00:02:34.058 LINK stub 00:02:34.058 LINK zipf 00:02:34.058 LINK bdev_svc 00:02:34.316 LINK verify 00:02:34.316 LINK ioat_perf 00:02:34.316 LINK spdk_nvme_discover 00:02:34.316 LINK spdk_dd 00:02:34.316 LINK spdk_trace 00:02:34.316 LINK test_dma 00:02:34.574 LINK pci_ut 00:02:34.574 LINK nvme_fuzz 00:02:34.574 LINK spdk_bdev 00:02:34.574 LINK vhost_fuzz 00:02:34.574 CC test/event/reactor/reactor.o 00:02:34.574 CC test/event/event_perf/event_perf.o 00:02:34.574 LINK mem_callbacks 00:02:34.574 CC test/event/reactor_perf/reactor_perf.o 00:02:34.574 LINK spdk_nvme 00:02:34.574 CC examples/vmd/led/led.o 00:02:34.574 CC examples/idxd/perf/perf.o 00:02:34.574 CC test/event/app_repeat/app_repeat.o 00:02:34.574 CC examples/vmd/lsvmd/lsvmd.o 00:02:34.574 CC examples/sock/hello_world/hello_sock.o 00:02:34.574 CC examples/thread/thread/thread_ex.o 00:02:34.574 CC test/event/scheduler/scheduler.o 
00:02:34.834 CC app/vhost/vhost.o 00:02:34.834 LINK reactor 00:02:34.834 LINK event_perf 00:02:34.834 LINK spdk_nvme_perf 00:02:34.834 LINK reactor_perf 00:02:34.834 LINK led 00:02:34.834 LINK spdk_nvme_identify 00:02:34.834 LINK lsvmd 00:02:34.834 LINK app_repeat 00:02:34.834 LINK spdk_top 00:02:35.092 LINK vhost 00:02:35.092 LINK scheduler 00:02:35.092 LINK hello_sock 00:02:35.092 LINK thread 00:02:35.092 LINK idxd_perf 00:02:35.351 CC test/nvme/simple_copy/simple_copy.o 00:02:35.351 CC test/nvme/connect_stress/connect_stress.o 00:02:35.351 CC test/nvme/e2edp/nvme_dp.o 00:02:35.351 CC test/nvme/startup/startup.o 00:02:35.351 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:35.351 CC test/nvme/reset/reset.o 00:02:35.351 CC test/nvme/sgl/sgl.o 00:02:35.351 CC test/nvme/compliance/nvme_compliance.o 00:02:35.351 CC test/nvme/overhead/overhead.o 00:02:35.351 CC test/nvme/reserve/reserve.o 00:02:35.351 CC test/nvme/fdp/fdp.o 00:02:35.351 CC test/nvme/fused_ordering/fused_ordering.o 00:02:35.351 CC test/nvme/aer/aer.o 00:02:35.351 CC test/nvme/cuse/cuse.o 00:02:35.351 CC test/nvme/err_injection/err_injection.o 00:02:35.351 CC test/accel/dif/dif.o 00:02:35.351 CC test/nvme/boot_partition/boot_partition.o 00:02:35.351 LINK memory_ut 00:02:35.351 CC test/blobfs/mkfs/mkfs.o 00:02:35.351 CC test/lvol/esnap/esnap.o 00:02:35.610 LINK startup 00:02:35.610 LINK fused_ordering 00:02:35.610 LINK boot_partition 00:02:35.610 LINK connect_stress 00:02:35.610 LINK doorbell_aers 00:02:35.610 LINK err_injection 00:02:35.610 CC examples/nvme/arbitration/arbitration.o 00:02:35.610 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:35.610 LINK reserve 00:02:35.610 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:35.610 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:35.610 CC examples/nvme/hotplug/hotplug.o 00:02:35.610 LINK simple_copy 00:02:35.610 CC examples/nvme/reconnect/reconnect.o 00:02:35.610 CC examples/nvme/abort/abort.o 00:02:35.610 CC examples/nvme/hello_world/hello_world.o 
00:02:35.610 LINK mkfs 00:02:35.610 LINK reset 00:02:35.610 CC examples/accel/perf/accel_perf.o 00:02:35.610 LINK nvme_dp 00:02:35.610 LINK sgl 00:02:35.610 LINK overhead 00:02:35.610 LINK aer 00:02:35.610 CC examples/blob/cli/blobcli.o 00:02:35.610 CC examples/blob/hello_world/hello_blob.o 00:02:35.610 LINK fdp 00:02:35.610 LINK nvme_compliance 00:02:35.870 LINK pmr_persistence 00:02:35.870 LINK cmb_copy 00:02:35.870 LINK iscsi_fuzz 00:02:35.870 LINK hotplug 00:02:35.870 LINK hello_world 00:02:35.870 LINK dif 00:02:35.870 LINK arbitration 00:02:35.870 LINK hello_blob 00:02:36.128 LINK reconnect 00:02:36.128 LINK abort 00:02:36.128 LINK nvme_manage 00:02:36.128 LINK accel_perf 00:02:36.387 LINK blobcli 00:02:36.387 LINK cuse 00:02:36.650 CC test/bdev/bdevio/bdevio.o 00:02:36.908 CC examples/bdev/hello_world/hello_bdev.o 00:02:36.908 CC examples/bdev/bdevperf/bdevperf.o 00:02:36.908 LINK bdevio 00:02:37.168 LINK hello_bdev 00:02:37.736 LINK bdevperf 00:02:38.304 CC examples/nvmf/nvmf/nvmf.o 00:02:38.871 LINK nvmf 00:02:42.189 LINK esnap 00:02:42.189 00:02:42.189 real 1m27.827s 00:02:42.189 user 16m22.799s 00:02:42.189 sys 5m28.266s 00:02:42.189 16:19:38 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:42.189 16:19:38 make -- common/autotest_common.sh@10 -- $ set +x 00:02:42.189 ************************************ 00:02:42.189 END TEST make 00:02:42.189 ************************************ 00:02:42.189 16:19:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:42.189 16:19:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:42.189 16:19:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:42.189 16:19:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.189 16:19:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:42.189 16:19:38 -- pm/common@44 -- $ pid=1373860 00:02:42.189 16:19:38 -- pm/common@50 -- $ kill -TERM 1373860 
00:02:42.189 16:19:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.189 16:19:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:42.189 16:19:38 -- pm/common@44 -- $ pid=1373862 00:02:42.189 16:19:38 -- pm/common@50 -- $ kill -TERM 1373862 00:02:42.189 16:19:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.189 16:19:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:42.189 16:19:38 -- pm/common@44 -- $ pid=1373864 00:02:42.189 16:19:38 -- pm/common@50 -- $ kill -TERM 1373864 00:02:42.189 16:19:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.189 16:19:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:42.189 16:19:38 -- pm/common@44 -- $ pid=1373889 00:02:42.189 16:19:38 -- pm/common@50 -- $ sudo -E kill -TERM 1373889 00:02:42.189 16:19:39 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:02:42.189 16:19:39 -- nvmf/common.sh@7 -- # uname -s 00:02:42.189 16:19:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:42.189 16:19:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:42.189 16:19:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:42.189 16:19:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:42.189 16:19:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:42.189 16:19:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:42.189 16:19:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:42.189 16:19:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:42.189 16:19:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:42.189 16:19:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:42.449 16:19:39 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:02:42.449 16:19:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:02:42.449 16:19:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:42.449 16:19:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:42.449 16:19:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:42.449 16:19:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:42.449 16:19:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:02:42.449 16:19:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:42.449 16:19:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:42.449 16:19:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:42.449 16:19:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.449 16:19:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.449 16:19:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.449 16:19:39 -- paths/export.sh@5 -- # export PATH 00:02:42.449 16:19:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:42.449 16:19:39 -- nvmf/common.sh@47 -- # : 0 00:02:42.449 16:19:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:42.449 16:19:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:42.449 16:19:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:42.449 16:19:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:42.449 16:19:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:42.449 16:19:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:42.449 16:19:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:42.449 16:19:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:42.449 16:19:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:42.449 16:19:39 -- spdk/autotest.sh@32 -- # uname -s 00:02:42.449 16:19:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:42.449 16:19:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:42.449 16:19:39 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:42.449 16:19:39 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:42.449 16:19:39 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:42.450 16:19:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:42.450 16:19:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:42.450 16:19:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:42.450 16:19:39 -- spdk/autotest.sh@48 -- # udevadm_pid=1445405 00:02:42.450 16:19:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:42.450 16:19:39 -- spdk/autotest.sh@47 -- 
# /usr/sbin/udevadm monitor --property 00:02:42.450 16:19:39 -- pm/common@17 -- # local monitor 00:02:42.450 16:19:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.450 16:19:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.450 16:19:39 -- pm/common@21 -- # date +%s 00:02:42.450 16:19:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.450 16:19:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:42.450 16:19:39 -- pm/common@21 -- # date +%s 00:02:42.450 16:19:39 -- pm/common@25 -- # sleep 1 00:02:42.450 16:19:39 -- pm/common@21 -- # date +%s 00:02:42.450 16:19:39 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721830779 00:02:42.450 16:19:39 -- pm/common@21 -- # date +%s 00:02:42.450 16:19:39 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721830779 00:02:42.450 16:19:39 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721830779 00:02:42.450 16:19:39 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721830779 00:02:42.450 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721830779_collect-vmstat.pm.log 00:02:42.450 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721830779_collect-cpu-load.pm.log 00:02:42.450 Redirecting to 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721830779_collect-cpu-temp.pm.log 00:02:42.450 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721830779_collect-bmc-pm.bmc.pm.log 00:02:43.388 16:19:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:43.388 16:19:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:43.388 16:19:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:43.388 16:19:40 -- common/autotest_common.sh@10 -- # set +x 00:02:43.388 16:19:40 -- spdk/autotest.sh@59 -- # create_test_list 00:02:43.388 16:19:40 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:43.388 16:19:40 -- common/autotest_common.sh@10 -- # set +x 00:02:43.388 16:19:40 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:02:43.388 16:19:40 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:43.388 16:19:40 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:43.388 16:19:40 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:02:43.388 16:19:40 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:43.388 16:19:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:43.388 16:19:40 -- common/autotest_common.sh@1455 -- # uname 00:02:43.388 16:19:40 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:43.388 16:19:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:43.388 16:19:40 -- common/autotest_common.sh@1475 -- # uname 00:02:43.388 16:19:40 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:43.388 16:19:40 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:43.388 16:19:40 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:43.388 16:19:40 -- spdk/autotest.sh@72 -- # hash lcov 
00:02:43.388 16:19:40 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:43.388 16:19:40 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:43.388 --rc lcov_branch_coverage=1 00:02:43.388 --rc lcov_function_coverage=1 00:02:43.388 --rc genhtml_branch_coverage=1 00:02:43.388 --rc genhtml_function_coverage=1 00:02:43.388 --rc genhtml_legend=1 00:02:43.388 --rc geninfo_all_blocks=1 00:02:43.388 ' 00:02:43.388 16:19:40 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:43.388 --rc lcov_branch_coverage=1 00:02:43.388 --rc lcov_function_coverage=1 00:02:43.388 --rc genhtml_branch_coverage=1 00:02:43.388 --rc genhtml_function_coverage=1 00:02:43.388 --rc genhtml_legend=1 00:02:43.388 --rc geninfo_all_blocks=1 00:02:43.388 ' 00:02:43.388 16:19:40 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:43.388 --rc lcov_branch_coverage=1 00:02:43.388 --rc lcov_function_coverage=1 00:02:43.388 --rc genhtml_branch_coverage=1 00:02:43.388 --rc genhtml_function_coverage=1 00:02:43.388 --rc genhtml_legend=1 00:02:43.388 --rc geninfo_all_blocks=1 00:02:43.388 --no-external' 00:02:43.388 16:19:40 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:43.388 --rc lcov_branch_coverage=1 00:02:43.388 --rc lcov_function_coverage=1 00:02:43.388 --rc genhtml_branch_coverage=1 00:02:43.388 --rc genhtml_function_coverage=1 00:02:43.388 --rc genhtml_legend=1 00:02:43.388 --rc geninfo_all_blocks=1 00:02:43.388 --no-external' 00:02:43.388 16:19:40 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:43.647 lcov: LCOV version 1.14 00:02:43.647 16:19:40 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d 
/var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:02:50.213 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:50.213 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 
00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:12.153 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:12.153 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:12.153 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:12.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:12.153 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no 
functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:12.154 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:12.154 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:12.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:12.155 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:12.155 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:12.155 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:12.155 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:12.155 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:12.155 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:12.155 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:12.155 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:12.155 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:12.155 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:12.155 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:12.155 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:12.155 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:12.155 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:12.155 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:12.155 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:12.155 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:12.155 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:12.155 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:12.155 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:12.155 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:12.155 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:12.155 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:12.155 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:12.155 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:12.155 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:16.348 16:20:13 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:16.348 16:20:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:16.348 16:20:13 -- common/autotest_common.sh@10 -- # set +x 00:03:16.348 16:20:13 -- spdk/autotest.sh@91 -- # rm -f 00:03:16.348 16:20:13 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.542 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:20.542 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:20.542 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:20.542 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:20.542 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:20.542 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:20.858 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:20.858 
0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:20.858 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:20.859 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:20.859 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:20.859 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:20.859 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:20.859 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:20.859 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:20.859 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:20.859 0000:d8:00.0 (8086 0a54): Already using the nvme driver
00:03:21.126 16:20:17 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:03:21.126 16:20:17 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:21.126 16:20:17 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:21.126 16:20:17 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:21.126 16:20:17 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:21.126 16:20:17 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:21.126 16:20:17 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:21.126 16:20:17 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:21.126 16:20:17 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:21.126 16:20:17 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:21.126 16:20:17 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:21.126 16:20:17 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:21.126 16:20:17 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:21.126 16:20:17 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:21.126 16:20:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:21.126 No valid GPT data, bailing
00:03:21.126 16:20:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:21.126 16:20:17 -- scripts/common.sh@391 -- # pt=
00:03:21.126 16:20:17 -- scripts/common.sh@392 -- # return 1
00:03:21.126 16:20:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:21.126 1+0 records in
00:03:21.126 1+0 records out
00:03:21.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00634668 s, 165 MB/s
00:03:21.126 16:20:17 -- spdk/autotest.sh@118 -- # sync
00:03:21.127 16:20:17 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:21.127 16:20:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:21.127 16:20:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:27.695 16:20:23 -- spdk/autotest.sh@124 -- # uname -s
00:03:27.695 16:20:23 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:27.695 16:20:23 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh
00:03:27.695 16:20:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:27.695 16:20:23 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:27.695 16:20:23 -- common/autotest_common.sh@10 -- # set +x
00:03:27.695 ************************************
00:03:27.695 START TEST setup.sh
00:03:27.695 ************************************
00:03:27.695 16:20:23 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh
00:03:27.695 * Looking for test storage...
00:03:27.695 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup
00:03:27.695 16:20:24 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:27.695 16:20:24 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:27.695 16:20:24 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh
00:03:27.695 16:20:24 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:27.695 16:20:24 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:27.695 16:20:24 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:27.695 ************************************
00:03:27.695 START TEST acl
00:03:27.695 ************************************
00:03:27.695 16:20:24 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh
00:03:27.695 * Looking for test storage...
00:03:27.695 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup
00:03:27.695 16:20:24 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:03:27.695 16:20:24 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:27.695 16:20:24 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:27.695 16:20:24 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:27.695 16:20:24 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:27.695 16:20:24 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:27.695 16:20:24 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:27.695 16:20:24 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:27.695 16:20:24 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:27.695 16:20:24 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:03:27.695 16:20:24 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:03:27.695 16:20:24 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:03:27.695 16:20:24 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:03:27.695 16:20:24 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:03:27.695 16:20:24 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:27.695 16:20:24 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:03:31.888 16:20:28 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:03:31.888 16:20:28 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:03:31.888 16:20:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:31.888 16:20:28 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:03:31.888 16:20:28 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:03:31.888 16:20:28 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:03:36.081 Hugepages
00:03:36.081 node hugesize free / total
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.081
00:03:36.081 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:36.081 16:20:32 setup.sh.acl -- 
setup/acl.sh@19 -- # continue
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:03:36.081 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:03:36.082 16:20:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.342 16:20:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]]
00:03:36.342 16:20:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:36.342 16:20:33 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:03:36.342 16:20:33 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:36.342 16:20:33 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:36.342 16:20:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:36.342 16:20:33 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:03:36.342 16:20:33 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:03:36.342 16:20:33 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:36.342 16:20:33 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:36.342 16:20:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:36.342 ************************************
00:03:36.342 START TEST denied
00:03:36.342 ************************************
00:03:36.342 16:20:33 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied
00:03:36.342 16:20:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0'
00:03:36.342 16:20:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:03:36.342 16:20:33 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0'
00:03:36.342 16:20:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:03:36.342 16:20:33 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:03:41.626 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0
00:03:41.626 16:20:37 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0
00:03:41.626 16:20:37 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:03:41.626 16:20:37 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:03:41.626 16:20:37 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]]
00:03:41.626 16:20:37 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver
00:03:41.626 16:20:37 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:41.626 16:20:37 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:41.626 16:20:37 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:03:41.626 16:20:37 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:41.626 16:20:37 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:03:46.900
00:03:46.900 real 0m9.653s
00:03:46.900 user 0m3.013s
00:03:46.900 sys 0m5.921s
00:03:46.900 16:20:42 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:46.900 16:20:42 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:46.900 ************************************
00:03:46.900 END TEST denied
00:03:46.900 ************************************
00:03:46.900 16:20:42 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:46.900 16:20:42 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:46.900 16:20:42 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:46.900 16:20:42 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:46.900 ************************************
00:03:46.900 START TEST allowed
00:03:46.900 ************************************
00:03:46.900 16:20:42 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed
00:03:46.900 16:20:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0
00:03:46.900 16:20:42 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:03:46.900 16:20:42 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*'
00:03:46.900 16:20:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:03:46.900 16:20:42 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:03:52.218 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:03:52.218 16:20:49 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:03:52.218 16:20:49 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:03:52.218 16:20:49 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:03:52.218 16:20:49 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:52.218 16:20:49 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:03:57.505
00:03:57.505 real 0m10.941s
00:03:57.505 user 0m3.034s
00:03:57.506 sys 0m6.153s
00:03:57.506 16:20:53 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:57.506 16:20:53 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:03:57.506 ************************************
00:03:57.506 END TEST allowed
00:03:57.506 ************************************
00:03:57.506
00:03:57.506 real 0m29.686s
00:03:57.506 user 0m9.208s
00:03:57.506 sys 0m18.317s
00:03:57.506 16:20:53 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:57.506 16:20:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:57.506 ************************************
00:03:57.506 END TEST acl
00:03:57.506 ************************************
00:03:57.506 16:20:53 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh
00:03:57.506 16:20:53 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:57.506 16:20:53 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:57.506 16:20:53 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:57.506 ************************************
00:03:57.506 START TEST hugepages
00:03:57.506 ************************************
00:03:57.506 16:20:53 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh
00:03:57.506 * Looking for test storage...
00:03:57.506 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41138936 kB' 'MemAvailable: 45123464 kB' 'Buffers: 6064 kB' 'Cached: 10829800 kB' 'SwapCached: 0 kB' 'Active: 7647128 kB' 'Inactive: 3689560 kB' 'Active(anon): 7248704 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504112 kB' 'Mapped: 207784 kB' 'Shmem: 6747880 kB' 'KReclaimable: 544252 kB' 'Slab: 1190628 kB' 'SReclaimable: 544252 kB' 'SUnreclaim: 646376 kB' 'KernelStack: 22192 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 8724340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218700 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 
16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.506 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.506 16:20:54 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.507 
16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue [... identical continue/read iterations for the remaining /proc/meminfo keys elided ...] 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:57.507 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
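The get_nodes/clear_hp trace above enumerates the NUMA node directories under /sys/devices/system/node and writes 0 to every per-node nr_hugepages file. A minimal sketch of that pattern, with the sysfs root passed as a parameter (an assumption, so it can run against a mock tree instead of the real /sys, which needs root):

```shell
#!/usr/bin/env bash
# Sketch of the clear_hp pattern from the trace: for every node directory,
# zero out each hugepages-*kB/nr_hugepages counter.
clear_hp() {
    local sys_root=$1 node hp
    for node in "$sys_root"/node*; do
        for hp in "$node"/hugepages/hugepages-*kB; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
}

# Mock sysfs tree so the sketch runs unprivileged:
mock=$(mktemp -d)
mkdir -p "$mock/node0/hugepages/hugepages-2048kB" \
         "$mock/node1/hugepages/hugepages-2048kB"
echo 1024 > "$mock/node0/hugepages/hugepages-2048kB/nr_hugepages"
echo 1024 > "$mock/node1/hugepages/hugepages-2048kB/nr_hugepages"
clear_hp "$mock"
cat "$mock/node0/hugepages/hugepages-2048kB/nr_hugepages"   # prints 0
```

Against the real /sys the same loop must run as root, and the script also exports CLEAR_HUGE=yes as seen in the trace.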
00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:57.508 16:20:54 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:57.508 16:20:54 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:57.508 16:20:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:57.508 16:20:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.508 ************************************ 00:03:57.508 START TEST default_setup 00:03:57.508 ************************************ 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.508 16:20:54 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.508 16:20:54 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:01.701 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:01.701 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:01.701 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:01.701 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:01.701 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:01.701 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.701 0000:00:04.1 (8086 2021): ioatdma -> 
vfio-pci 00:04:01.701 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:01.701 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:01.701 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:01.701 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:01.701 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:01.701 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:01.701 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.701 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:01.701 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:03.606 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43256996 kB' 'MemAvailable: 47240820 kB' 'Buffers: 6064 kB' 'Cached: 10829940 kB' 'SwapCached: 0 kB' 'Active: 7672288 kB' 'Inactive: 3689560 kB' 'Active(anon): 7273864 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529404 kB' 'Mapped: 208732 kB' 'Shmem: 6748020 kB' 'KReclaimable: 543548 kB' 'Slab: 1189044 kB' 'SReclaimable: 543548 kB' 'SUnreclaim: 645496 kB' 'KernelStack: 22432 kB' 'PageTables: 9740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8757696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218784 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 
19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.870 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [... identical continue/read iterations for the remaining /proc/meminfo keys elided ...] 00:04:03.871 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
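The iterations above are the xtrace of a get_meminfo helper: it reads meminfo field by field with `IFS=': ' read -r var val _`, `continue`s past non-matching keys, and echoes the value when the requested key matches. A hedged sketch of that loop, with the file path as a parameter (an assumption, so it can be exercised against a fixture instead of the live /proc/meminfo):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop whose per-key trace fills this log:
# split each "Key: value unit" line on ':' and spaces, print the value
# for the requested key.
get_meminfo() {
    local get=$1 mem_f=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

# Fixture with two of the fields seen in the snapshot above:
mem_f=$(mktemp)
printf '%s\n' 'MemTotal: 60295220 kB' 'Hugepagesize: 2048 kB' > "$mem_f"
get_meminfo Hugepagesize "$mem_f"   # prints 2048
```

This explains why each lookup produces one trace line per meminfo key: the shell's xtrace logs every `continue` until the match.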
00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.872 16:21:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43256532 kB' 'MemAvailable: 47240356 kB' 'Buffers: 6064 kB' 'Cached: 10829944 kB' 'SwapCached: 0 kB' 'Active: 7671764 kB' 'Inactive: 3689560 kB' 'Active(anon): 7273340 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529472 kB' 'Mapped: 208924 kB' 'Shmem: 6748024 kB' 'KReclaimable: 543548 kB' 'Slab: 1188992 kB' 'SReclaimable: 543548 kB' 'SUnreclaim: 645444 kB' 'KernelStack: 22400 kB' 'PageTables: 9380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8757848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218816 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 
16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.872 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.873 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.874 16:21:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 
00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43259144 kB' 'MemAvailable: 47242968 kB' 'Buffers: 6064 kB' 'Cached: 10829972 kB' 'SwapCached: 0 kB' 'Active: 7668272 kB' 'Inactive: 3689560 kB' 'Active(anon): 7269848 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525308 kB' 'Mapped: 208452 kB' 'Shmem: 6748052 kB' 'KReclaimable: 543548 kB' 'Slab: 1189052 kB' 'SReclaimable: 543548 kB' 'SUnreclaim: 645504 kB' 'KernelStack: 22544 kB' 'PageTables: 10236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8755552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218876 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.874 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 
16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.875 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 
16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.876 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.877 16:21:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.877 nr_hugepages=1024 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.877 resv_hugepages=0 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.877 surplus_hugepages=0 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.877 anon_hugepages=0 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- 
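The long run of `continue` lines above is the `get_meminfo` helper scanning `/proc/meminfo` key by key with `IFS=': '` and `read -r var val _`, skipping every non-matching key and echoing the value once it reaches `HugePages_Rsvd` (here `0`). A minimal standalone sketch of that pattern, assuming the behavior shown in the trace (the function name and the optional file argument are illustrative; the real helper lives in setup/common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo scan seen in the trace: split each line on
# ': ', 'continue' past non-matching keys (one continue per key above),
# and print the value of the requested key without its 'kB' unit.
get_meminfo_sketch() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # mirrors the [[ key == \H\u\g\e... ]] / continue pairs
        echo "${val%% *}"                  # drop a trailing unit such as 'kB' if present
        return 0
    done < "$file"
    return 1
}

# Demonstrated against a captured snapshot rather than the live /proc/meminfo:
printf 'HugePages_Total: 1024\nHugePages_Rsvd: 0\n' > /tmp/meminfo.sample
get_meminfo_sketch HugePages_Rsvd /tmp/meminfo.sample   # prints 0
```

One scan per lookup is why the trace repeats the whole key list for `HugePages_Rsvd` and then again for `HugePages_Total` below.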
setup/common.sh@18 -- # local node= 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.877 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43253464 kB' 'MemAvailable: 47237288 kB' 'Buffers: 6064 kB' 'Cached: 10829992 kB' 'SwapCached: 0 kB' 'Active: 7675216 kB' 'Inactive: 3689560 kB' 'Active(anon): 7276792 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532252 kB' 'Mapped: 208692 kB' 'Shmem: 6748072 kB' 'KReclaimable: 543548 kB' 'Slab: 1189052 kB' 'SReclaimable: 543548 kB' 'SUnreclaim: 645504 kB' 'KernelStack: 22224 kB' 'PageTables: 9508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8760912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218784 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.878 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:03.879 16:21:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:03.879 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace elided: IFS=': ' read loop skips CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted while scanning for HugePages_Total]
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- #
get_meminfo HugePages_Surp 0
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:03.880 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:03.881 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25786028 kB' 'MemUsed: 6853112 kB' 'SwapCached: 0 kB' 'Active: 2903328 kB' 'Inactive: 231284 kB' 'Active(anon): 2770280 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2812728 kB' 'Mapped: 90144 kB' 'AnonPages: 325188 kB' 'Shmem: 2448396 kB' 'KernelStack: 12024 kB' 'PageTables: 5248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 215064 kB' 'Slab: 516136 kB' 'SReclaimable: 215064 kB' 'SUnreclaim: 301072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:03.881 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace elided: read loop skips every node0 meminfo field from MemTotal through HugePages_Free while scanning for HugePages_Surp]
00:04:03.882 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.882 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:03.882 16:21:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:03.882 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:03.882 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.882 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.882 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.882 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
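[editor's note] Every get_meminfo call traced above follows the same pattern: pick /sys/devices/system/node/nodeN/meminfo when a node argument is given (falling back to /proc/meminfo), then scan key/value pairs with IFS=': ' until the requested field matches. A minimal standalone sketch of that parse loop (the sample input below is a made-up excerpt, not this run's data):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo helper seen in setup/common.sh: scan a
# meminfo-style stream on stdin for one key and print its value.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Hypothetical sample input (real runs read /proc/meminfo or the
# per-node /sys/devices/system/node/nodeN/meminfo file instead).
sample='MemTotal: 32639140 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0'

get_meminfo HugePages_Surp <<<"$sample"   # prints 0
```

Because IFS=': ' splits on both colon and space, `val` receives just the number and the trailing `kB` unit lands in the throwaway `_` variable.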
00:04:03.882 node0=1024 expecting 1024
00:04:03.882 16:21:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:03.882
00:04:03.882 real	0m6.606s
00:04:03.882 user	0m1.762s
00:04:03.882 sys	0m2.971s
00:04:03.882 16:21:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:03.882 16:21:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:03.882 ************************************
00:04:03.882 END TEST default_setup
00:04:03.882 ************************************
00:04:04.141 16:21:00 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:04.141 16:21:00 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:04.141 16:21:00 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:04.141 16:21:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:04.141 ************************************
00:04:04.141 START TEST per_node_1G_alloc
00:04:04.141 ************************************
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:04.141 16:21:00
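[editor's note] The get_test_nr_hugepages trace beginning above converts the requested size (1048576 kB, i.e. 1 GiB) into a page count using the 2048 kB default hugepage size of this run, then assigns that count to each requested NUMA node. A rough reconstruction of that arithmetic (my sketch, not the SPDK script verbatim):

```shell
#!/usr/bin/env bash
# Rough reconstruction of how get_test_nr_hugepages in setup/hugepages.sh
# derives a per-node hugepage count: size is in kB, and the default
# hugepage size on this machine is 2048 kB (2 MiB pages).
declare -a nodes_test

get_test_nr_hugepages() {
    local size=$1; shift
    local default_hugepages=2048               # kB, as reported by this run
    local nr_hugepages=$(( size / default_hugepages ))
    local node
    for node in "$@"; do                       # each requested NUMA node
        nodes_test[$node]=$nr_hugepages        # gets the full per-node count
    done
}

get_test_nr_hugepages 1048576 0 1              # 1 GiB worth per node, nodes 0 and 1
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512
```

This matches the trace: nr_hugepages=512 per node, 1024 total, later exported as NRHUGE=512 with HUGENODE=0,1.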
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:04.141 16:21:00
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:04.141 16:21:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:04:08.339 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:08.339 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:08.339 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43244184 kB' 'MemAvailable: 47227976 kB' 'Buffers: 6064 kB' 'Cached: 10830100 kB' 'SwapCached: 0 kB' 'Active: 7669636 kB' 'Inactive: 3689560 kB' 'Active(anon): 7271212 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525732 kB' 'Mapped: 207724 kB' 'Shmem: 6748180 kB' 'KReclaimable: 543516 kB' 'Slab: 1187864 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 644348 kB' 'KernelStack: 22128 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8741292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218864 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB'
00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: read loop skips MemTotal and MemFree while scanning for AnonHugePages]
00:04:08.340 16:21:04
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace elided: read loop skips MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal and SwapFree while scanning for AnonHugePages]
00:04:08.340 16:21:04
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.340 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.341 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43245480 kB' 'MemAvailable: 47229272 kB' 'Buffers: 6064 kB' 'Cached: 10830104 kB' 
'SwapCached: 0 kB' 'Active: 7668908 kB' 'Inactive: 3689560 kB' 'Active(anon): 7270484 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525512 kB' 'Mapped: 207712 kB' 'Shmem: 6748184 kB' 'KReclaimable: 543516 kB' 'Slab: 1187932 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 644416 kB' 'KernelStack: 22128 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8741312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218816 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:08.341 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.341 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.341 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.341 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.341 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.341 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.341 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.341 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
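The trace above shows the parsing pattern driving this log: split each `/proc/meminfo` line on `': '`, read the key and value, and skip (`continue`) every key that does not match the one requested. A minimal standalone sketch of that pattern follows; the function name `get_meminfo` matches the trace, but this is a simplified reconstruction, not the actual SPDK `setup/common.sh` helper (which also handles per-node files under `/sys/devices/system/node/nodeN/meminfo` via `mapfile`).

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo parsing loop visible in the trace:
# for each line of /proc/meminfo, IFS=': ' splits "Key: value kB"
# into var=Key, val=value (trailing "kB" lands in _ and is ignored).
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys are skipped, mirroring the many
        # "continue" branches in the log above.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1  # key not found
}

get_meminfo MemTotal   # prints total memory in kB
```

On a Linux host this returns the raw kB (or page-count) figure for any meminfo key, which is how the surrounding test derives values such as `anon=0` from `AnonHugePages` and the surplus-page count from `HugePages_Surp`.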
continue 00:04:08.341 16:21:04 [... identical setup/common.sh@31 `IFS=': '` / `read -r var val _` / compare / `continue` trace repeated for each /proc/meminfo key from MemAvailable through CmaFree; none matches HugePages_Surp ...] 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@31 -- # read -r var val _ 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 
00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43250892 kB' 'MemAvailable: 47234684 kB' 'Buffers: 6064 kB' 'Cached: 10830116 kB' 'SwapCached: 0 kB' 'Active: 7668928 kB' 'Inactive: 3689560 kB' 'Active(anon): 7270504 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525544 kB' 'Mapped: 207712 kB' 'Shmem: 6748196 kB' 'KReclaimable: 543516 kB' 'Slab: 1187924 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 644408 kB' 'KernelStack: 22128 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8741448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218816 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:08.342 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [repeated per-key scan elided: MemTotal through HugePages_Free each compared against HugePages_Rsvd and skipped] 00:04:08.343 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.343 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.343 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.343 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:08.343 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.343 nr_hugepages=1024 00:04:08.343 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.343 resv_hugepages=0 00:04:08.343 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.343 surplus_hugepages=0 00:04:08.343 16:21:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.343 anon_hugepages=0 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e
/sys/devices/system/node/node/meminfo ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43251436 kB' 'MemAvailable: 47235228 kB' 'Buffers: 6064 kB' 'Cached: 10830144 kB' 'SwapCached: 0 kB' 'Active: 7668988 kB' 'Inactive: 3689560 kB' 'Active(anon): 7270564 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525652 kB' 'Mapped: 207772 kB' 'Shmem: 6748224 kB' 'KReclaimable: 543516 kB' 'Slab: 1187924 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 644408 kB' 'KernelStack: 22112 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8741356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218800 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 
16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.343 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:08.344 
16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26862596 kB' 'MemUsed: 5776544 kB' 'SwapCached: 0 kB' 'Active: 2897552 kB' 'Inactive: 231284 kB' 'Active(anon): 2764504 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2812776 kB' 'Mapped: 89364 kB' 'AnonPages: 319200 kB' 'Shmem: 2448444 kB' 'KernelStack: 11848 kB' 'PageTables: 5048 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 215032 kB' 'Slab: 515520 kB' 'SReclaimable: 215032 kB' 'SUnreclaim: 300488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 
16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.344 16:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.344 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.345 16:21:05 
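The trace above repeatedly runs `IFS=': ' read -r var val _` over a per-node meminfo file, skipping every field until the requested one (`HugePages_Surp`) matches, then echoing its value. A minimal, self-contained sketch of that `get_meminfo` pattern follows; the demo file path and its contents are illustrative, not values from this run:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop traced above: scan "Field: value kB"
# lines and print the value of one requested field.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Mirrors the traced [[ $var == $get ]] / continue pattern.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Demo against a synthetic meminfo file (hypothetical path/contents).
printf '%s\n' 'MemTotal: 1024 kB' 'HugePages_Surp: 0' > /tmp/meminfo.demo
get_meminfo HugePages_Surp /tmp/meminfo.demo   # prints 0
```

The real script additionally strips the `Node N ` prefix from `/sys/devices/system/node/nodeN/meminfo` lines via `mem=("${mem[@]#Node +([0-9]) }")` before this loop runs.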
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16388176 kB' 'MemUsed: 11267904 kB' 'SwapCached: 0 kB' 'Active: 4772732 kB' 'Inactive: 3458276 kB' 'Active(anon): 4507356 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8023472 kB' 'Mapped: 118408 kB' 'AnonPages: 207648 kB' 'Shmem: 4299820 kB' 'KernelStack: 10296 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 328484 kB' 'Slab: 672396 kB' 'SReclaimable: 328484 kB' 'SUnreclaim: 343912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 
16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.345 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:08.346 node0=512 expecting 512 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 
expecting 512' 00:04:08.346 node1=512 expecting 512 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:08.346 00:04:08.346 real 0m4.315s 00:04:08.346 user 0m1.505s 00:04:08.346 sys 0m2.862s 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.346 16:21:05 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:08.346 ************************************ 00:04:08.346 END TEST per_node_1G_alloc 00:04:08.346 ************************************ 00:04:08.346 16:21:05 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:08.346 16:21:05 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.346 16:21:05 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.346 16:21:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:08.605 ************************************ 00:04:08.605 START TEST even_2G_alloc 00:04:08.605 ************************************ 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 
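The `(( _no_nodes > 0 ))` / `nodes_test[_no_nodes - 1]=512` records traced above are the even per-node split of the requested hugepage count. A hedged sketch of that split, assuming the values seen in this run (1024 pages of 2 MB over 2 nodes):

```shell
#!/usr/bin/env bash
# Sketch of get_test_nr_hugepages_per_node with no user-pinned nodes:
# divide nr_hugepages evenly, filling nodes_test from the last node down.
nr_hugepages=1024
_no_nodes=2
declare -a nodes_test
_nr=$(( nr_hugepages / _no_nodes ))   # 512 per node
no=$_no_nodes
while (( no > 0 )); do
    nodes_test[no - 1]=$_nr
    (( no-- ))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # prints node0=512 node1=512
```

This matches the `node0=512 expecting 512` / `node1=512 expecting 512` lines the test echoes before asserting `[[ 512 == \5\1\2 ]]`.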
00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # 
[[ output == output ]]
00:04:08.605 16:21:05 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:04:12.835 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:12.835 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.835 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.836 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43195656 kB' 'MemAvailable: 47179448 kB' 'Buffers: 6064 kB' 'Cached: 10830280 kB' 'SwapCached: 0 kB' 'Active: 7665612 kB' 'Inactive: 3689560 kB' 'Active(anon): 7267188 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521776 kB' 'Mapped: 207900 kB' 'Shmem: 6748360 kB' 'KReclaimable: 543516 kB' 'Slab: 1188020 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 644504 kB' 'KernelStack: 22192 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8769944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218860 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB'
00:04:12.836-00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [repeated iterations elided: every field from MemTotal through HardwareCorrupted fails [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and hits "continue"]
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.837 16:21:09
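The trace above is bash xtrace output from `setup/common.sh`'s `get_meminfo` helper: it loads `/proc/meminfo`, splits each line on `': '`, and walks the fields until the requested one (`AnonHugePages` here) matches, then echoes its value. A minimal reconstruction of that lookup, simplified by assumption (the real helper also supports per-NUMA-node meminfo files and strips their `Node N ` prefix, which is omitted here):

```shell
#!/usr/bin/env bash
# Simplified sketch (not the suite's exact code) of the get_meminfo lookup
# traced above: scan a meminfo-style file and print one field's value.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    # IFS=': ' splits "MemTotal:   60295220 kB" into var/val/unit
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            printf '%s\n' "$val"   # numeric value; the "kB" unit lands in $_
            return 0
        fi
    done < "$mem_f"
    return 1                       # field not present
}
```

On the snapshot shown in the trace, `get_meminfo AnonHugePages` prints `0`, which is exactly the `echo 0` / `anon=0` step logged above.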
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.837 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43198536 kB' 'MemAvailable: 47182328 kB' 'Buffers: 6064 kB' 'Cached: 10830284 kB' 'SwapCached: 0 kB' 'Active: 7664928 kB' 'Inactive: 3689560 kB' 'Active(anon): 7266504 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521532 kB' 'Mapped: 207804 kB' 'Shmem: 6748364 kB' 'KReclaimable: 543516 kB' 'Slab: 1187988 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 644472 kB' 'KernelStack: 22176 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8769964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218812 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB'
00:04:12.837-00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [repeated iterations elided: every field from MemTotal through ShmemPmdMapped fails [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hits "continue"]
00:04:12.839 16:21:09
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.839 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 
-- # get_meminfo HugePages_Rsvd 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43196064 kB' 'MemAvailable: 47179856 kB' 'Buffers: 6064 kB' 'Cached: 10830300 kB' 'SwapCached: 0 kB' 'Active: 7668044 kB' 'Inactive: 3689560 kB' 'Active(anon): 7269620 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524604 kB' 'Mapped: 208308 kB' 'Shmem: 6748380 kB' 'KReclaimable: 543516 kB' 'Slab: 1187956 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 644440 kB' 'KernelStack: 22160 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 
'Committed_AS: 8775500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 
16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.839 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.839 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 
16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.840 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.841 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:12.841 nr_hugepages=1024 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.841 resv_hugepages=0 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- 
00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19-20 -- # local var val; local mem_f mem
00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22-25 -- # mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]
00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28-29 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43190036 kB' 'MemAvailable: 47173828 kB' 'Buffers: 6064 kB' 'Cached: 10830324 kB' 'SwapCached: 0 kB' 'Active: 7671452 kB' 'Inactive: 3689560 kB' 'Active(anon): 7273028 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527948 kB' 'Mapped: 208656 kB' 'Shmem: 6748404 kB' 'KReclaimable: 543516 kB' 'Slab: 1187948 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 644432 kB' 'KernelStack: 22112 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8776372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218816 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB'
setup/common.sh@31-32 -- # scan for HugePages_Total: MemTotal and MemFree miss and continue (00:04:12.841)
00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.841 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.841 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.842 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.842 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.843 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:12.843 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26821756 kB' 'MemUsed: 5817384 kB' 'SwapCached: 0 kB' 'Active: 2897636 kB' 'Inactive: 231284 kB' 'Active(anon): 2764588 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2812832 kB' 'Mapped: 89188 kB' 'AnonPages: 319204 kB' 'Shmem: 2448500 kB' 'KernelStack: 11816 kB' 'PageTables: 4960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 215032 kB' 'Slab: 515280 kB' 'SReclaimable: 215032 kB' 'SUnreclaim: 300248 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.843 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.844 16:21:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.844 [xtrace elided: the setup/common.sh@32 field-matching loop repeats verbatim (continue; IFS=': '; read -r var val _) for each remaining node0 meminfo field, from Writeback through HugePages_Free, until HugePages_Surp matches] 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.844 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16366312 kB' 'MemUsed: 11289768 kB' 'SwapCached: 0 kB' 'Active: 4767248 kB' 'Inactive: 3458276 kB' 'Active(anon): 4501872 kB' 
'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8023596 kB' 'Mapped: 118616 kB' 'AnonPages: 202148 kB' 'Shmem: 4299944 kB' 'KernelStack: 10312 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 328484 kB' 'Slab: 672660 kB' 'SReclaimable: 328484 kB' 'SUnreclaim: 344176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.845 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.845 [xtrace elided: the setup/common.sh@32 field-matching loop repeats verbatim for each remaining node1 meminfo field, from SwapCached through HugePages_Free, until HugePages_Surp matches] 00:04:12.846 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.846 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.846 16:21:09 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.846 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.846 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.846 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.846 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:12.846 node0=512 expecting 512 00:04:12.846 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.846 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.846 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.846 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:12.846 node1=512 expecting 512 00:04:12.846 16:21:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:12.846 00:04:12.846 real 0m4.413s 00:04:12.846 user 0m1.633s 00:04:12.846 sys 0m2.851s 00:04:12.846 16:21:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.846 16:21:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:12.846 ************************************ 00:04:12.846 END TEST even_2G_alloc 00:04:12.846 ************************************ 00:04:12.846 16:21:09 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:12.846 16:21:09 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.846 16:21:09 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.846 16:21:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:12.846 
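The even_2G_alloc trace above is dominated by setup/common.sh's get_meminfo helper scanning /proc/meminfo (or a per-node copy under /sys/devices/system/node) one field at a time with `IFS=': ' read -r var val _`. As a standalone illustration only, a minimal bash sketch of that lookup pattern might look like the following; `get_meminfo_sketch` is a hypothetical name, and the real helper in setup/common.sh carries additional logic not shown here.

```shell
#!/usr/bin/env bash
# Minimal sketch (not the real helper) of the meminfo lookup pattern
# exercised in the trace above. Looks up one field by name and prints
# its value; with a node number it reads the per-node meminfo file.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem var val _
    # Per-node branch, as in setup/common.sh@23-24 of the trace.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it,
    # mirroring the mem=("${mem[@]#Node +([0-9]) }") step in the trace.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        # Print the value (kB for most fields, a bare count for
        # HugePages_* fields) when the requested name matches.
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_sketch MemTotal          # prints the MemTotal value in kB
get_meminfo_sketch HugePages_Surp    # prints the surplus hugepage count
```

Called as `get_meminfo_sketch HugePages_Surp 1`, it would read /sys/devices/system/node/node1/meminfo instead of /proc/meminfo, which is the branch taken in the node1 pass of the trace.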
************************************ 00:04:12.846 START TEST odd_alloc 00:04:12.846 ************************************ 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:12.846 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.105 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:13.105 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:13.105 16:21:09 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@84 -- # : 1 00:04:13.105 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.105 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:13.105 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:13.105 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:13.105 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:13.105 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:13.105 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:13.105 16:21:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:13.105 16:21:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.105 16:21:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:17.306 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:17.306 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:17.306 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:17.306 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:17.306 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:17.306 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:17.306 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:17.307 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:17.307 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:17.307 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:17.307 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:17.307 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:17.307 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 
00:04:17.307 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:17.307 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:17.307 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:17.307 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.307 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43228964 kB' 'MemAvailable: 47212756 kB' 'Buffers: 6064 kB' 'Cached: 10830456 kB' 'SwapCached: 0 kB' 'Active: 7670136 kB' 'Inactive: 3689560 kB' 'Active(anon): 7271712 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526400 kB' 'Mapped: 208276 kB' 'Shmem: 6748536 kB' 'KReclaimable: 543516 kB' 'Slab: 1189292 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 645776 kB' 'KernelStack: 22160 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8774708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218928 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 
16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.307 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.307 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.308 
16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.308 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43232296 kB' 'MemAvailable: 47216088 kB' 'Buffers: 6064 kB' 'Cached: 10830460 kB' 'SwapCached: 0 kB' 'Active: 7663968 kB' 'Inactive: 3689560 kB' 'Active(anon): 7265544 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520300 kB' 'Mapped: 208172 kB' 'Shmem: 6748540 kB' 'KReclaimable: 543516 kB' 'Slab: 1189252 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 645736 kB' 'KernelStack: 22144 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8768604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218876 kB' 'VmallocChunk: 
0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.309 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 
16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.310 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.311 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43233456 kB' 'MemAvailable: 47217248 kB' 'Buffers: 6064 kB' 'Cached: 10830476 kB' 'SwapCached: 0 kB' 'Active: 7663968 kB' 'Inactive: 3689560 kB' 'Active(anon): 7265544 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520192 kB' 'Mapped: 207824 kB' 'Shmem: 6748556 kB' 'KReclaimable: 543516 kB' 'Slab: 1189308 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 645792 kB' 'KernelStack: 22144 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8768628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218876 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 
16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.311 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.312 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.313 16:21:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:17.313 16:21:13 
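The xtrace above is one pass of the `get_meminfo` helper (setup/common.sh@17-@33): it prints the captured meminfo lines, then scans `key: value` pairs with `IFS=': ' read -r var val _` until the requested key matches, echoes the value, and returns. A minimal standalone sketch of that pattern follows; the variable names mirror the trace, but the optional file argument is a hypothetical addition so the sketch does not depend on a live `/proc/meminfo` (the real helper also handles per-NUMA-node meminfo files, which this sketch omits).

```shell
# Minimal sketch of the get_meminfo pattern traced above.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    # Scan "key: value [unit]" lines; IFS=': ' splits key from value.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Usage against a small sample instead of the real /proc/meminfo:
printf '%s\n' 'MemTotal: 60295220 kB' 'HugePages_Total: 1025' \
    'HugePages_Rsvd: 0' 'HugePages_Surp: 0' > /tmp/meminfo.sample
get_meminfo HugePages_Rsvd /tmp/meminfo.sample
# prints: 0
```

This is why the trace shows one `[[ ... == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]` / `continue` pair per meminfo field: every non-matching key falls through to the next `read`.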
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:17.313 nr_hugepages=1025 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.313 resv_hugepages=0 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.313 surplus_hugepages=0 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.313 anon_hugepages=0 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43233204 kB' 'MemAvailable: 47216996 kB' 'Buffers: 6064 kB' 'Cached: 10830492 kB' 'SwapCached: 0 kB' 'Active: 7663912 kB' 'Inactive: 3689560 kB' 'Active(anon): 7265488 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520120 kB' 'Mapped: 207824 kB' 'Shmem: 6748572 kB' 'KReclaimable: 543516 kB' 'Slab: 1189308 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 645792 kB' 'KernelStack: 22128 kB' 'PageTables: 8664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8768648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218876 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.313 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r / [[ field == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue iterations elided for the remaining /proc/meminfo fields (Buffers through CmaFree) ...]
00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 
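Editor's note: the xtrace above is SPDK's `get_meminfo` helper in `setup/common.sh`: it reads a meminfo-style file line by line with `IFS=': '`, `continue`s past every field until the requested key matches, then echoes the value. A minimal standalone sketch of that pattern follows; the sample file and its values are illustrative, not taken from the test node (a real run reads `/proc/meminfo` or `/sys/devices/system/node/node*/meminfo`):

```shell
# get_meminfo KEY FILE: print the value of one /proc/meminfo-style
# field, mirroring the IFS=': ' read loop traced in the log above.
get_meminfo() {
  get=$1
  mem_f=$2
  while IFS=': ' read -r var val _; do
    # Skip every line until the field name matches the requested key.
    [ "$var" = "$get" ] || continue
    echo "$val"
    return 0
  done < "$mem_f"
  return 1
}

# Illustrative input (hypothetical values):
printf '%s\n' 'MemTotal: 60295220 kB' 'HugePages_Total: 1025' > /tmp/meminfo.sample
get_meminfo HugePages_Total /tmp/meminfo.sample   # prints 1025
```

With `IFS=': '`, `read -r var val _` splits `HugePages_Total: 1025` into the key, the value, and an (empty) remainder, which is why the script can treat `kB`-suffixed and bare-count fields uniformly.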
00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26857536 kB' 'MemUsed: 5781604 kB' 'SwapCached: 0 kB' 'Active: 2897092 kB' 'Inactive: 231284 kB' 'Active(anon): 2764044 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2812888 kB' 'Mapped: 89208 kB' 'AnonPages: 318624 kB' 'Shmem: 2448556 kB' 'KernelStack: 11832 kB' 'PageTables: 5008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 215032 kB' 'Slab: 516484 kB' 
'SReclaimable: 215032 kB' 'SUnreclaim: 301452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.315 16:21:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r / [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue iterations elided for the remaining node0 meminfo fields (MemFree through HugePages_Free) ...] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- #
mem_f=/sys/devices/system/node/node1/meminfo 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16376500 kB' 'MemUsed: 11279580 kB' 'SwapCached: 0 kB' 'Active: 4766552 kB' 'Inactive: 3458276 kB' 'Active(anon): 4501176 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8023728 kB' 'Mapped: 118616 kB' 'AnonPages: 201168 kB' 'Shmem: 4300076 kB' 'KernelStack: 10296 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 328484 kB' 'Slab: 672824 kB' 'SReclaimable: 328484 kB' 'SUnreclaim: 344340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.317 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 
16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:17.318 16:21:14 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:17.318 node0=512 expecting 513 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:17.318 node1=513 expecting 512 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:17.318 00:04:17.318 real 0m4.364s 00:04:17.318 user 0m1.642s 00:04:17.318 sys 0m2.805s 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.318 16:21:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:17.318 ************************************ 00:04:17.318 END TEST odd_alloc 00:04:17.318 ************************************ 00:04:17.318 16:21:14 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:17.319 16:21:14 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:17.319 16:21:14 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.319 16:21:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:17.319 
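[Editor's note] The long trace above is `get_meminfo` from `setup/common.sh` walking every `/proc/meminfo` field looking for `HugePages_Surp`. A minimal standalone sketch of that pattern is below; it is a reconstruction from the trace, not the actual helper, and the function name `get_meminfo_sketch` is hypothetical. It mirrors the traced steps: pick the per-node `/sys/devices/system/node/node<N>/meminfo` file when available, strip the `Node <N> ` prefix those files carry, then split each line with `IFS=': '` until the requested key matches.

```shell
#!/usr/bin/env bash
# Sketch reconstructed from the xtrace above (setup/common.sh@17..@33);
# get_meminfo_sketch is a hypothetical name, not the real helper.
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo_sketch() {
    local get=$1 node=${2:-}   # e.g. get=HugePages_Surp node=1
    local var val _ mem
    local mem_f=/proc/meminfo
    # Prefer the per-node meminfo file when a node index is given.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip it,
    # exactly as the traced mem=("${mem[@]#Node +([0-9]) }") does.
    mem=("${mem[@]#Node +([0-9]) }")
    # Split "Key:   value kB" on colon/space; echo the matching value.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
```

Note the quadratic look of the trace: because the loop re-reads fields for each query, one `get_meminfo HugePages_Surp` call emits a `[[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue` pair for every preceding meminfo key, which is why the log above repeats so heavily.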
************************************ 00:04:17.319 START TEST custom_alloc 00:04:17.319 ************************************ 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 
00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # 
get_test_nr_hugepages_per_node 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.319 16:21:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:21.523 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:00:04.6 (8086 2021): Already using the 
vfio-pci driver 00:04:21.523 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:21.523 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always 
[madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42220724 kB' 'MemAvailable: 46204516 kB' 'Buffers: 6064 kB' 'Cached: 10830616 kB' 'SwapCached: 0 kB' 'Active: 7665076 kB' 'Inactive: 3689560 kB' 'Active(anon): 7266652 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521084 kB' 'Mapped: 207848 kB' 'Shmem: 6748696 kB' 'KReclaimable: 543516 kB' 'Slab: 1188708 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 645192 kB' 'KernelStack: 22368 kB' 'PageTables: 9524 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8772124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219052 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.524 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical @32/@31 scan repeated for each remaining /proc/meminfo key ...] 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
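The @31/@32 trace above is setup/common.sh scanning /proc/meminfo one field at a time: split each line on `IFS=': '` with `read -r var val _`, `continue` until the key matches the requested one, then echo its value (0 for AnonHugePages here). A minimal standalone sketch of that parsing pattern — `get_meminfo_sketch` is a hypothetical helper name, not the SPDK function itself, and it reads meminfo-formatted text from stdin instead of the mapfile'd array:

```shell
# Sketch of the per-key scan seen in the trace: split each line on ': ',
# print the value of the requested key, and default to 0 when the key
# is absent (as get_meminfo does for AnonHugePages above).
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    echo 0
}
```

For example, `get_meminfo_sketch HugePages_Surp < /proc/meminfo` prints the surplus-hugepage count, or 0 when the field is missing.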
00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42225104 kB' 'MemAvailable: 46208896 kB' 'Buffers: 6064 kB' 'Cached: 10830620 kB' 'SwapCached: 0 kB' 'Active: 7665540 kB' 'Inactive: 3689560 kB' 'Active(anon): 7267116 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521612 kB' 'Mapped: 207848 kB' 'Shmem: 6748700 kB' 'KReclaimable: 543516 kB' 'Slab: 1188532 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 645016 kB' 'KernelStack: 22336 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8772140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218988 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:21.525 
16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.525 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical @32/@31 scan repeated for each remaining /proc/meminfo key ...] 00:04:21.527 16:21:17
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.527 16:21:17 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.527 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.527 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.527 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:21.527 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.527 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.527 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.527 16:21:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.527 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.527 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.527 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.527 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.527 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.527 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42223040 kB' 'MemAvailable: 46206832 kB' 'Buffers: 6064 kB' 'Cached: 10830632 kB' 'SwapCached: 0 kB' 'Active: 7665100 kB' 'Inactive: 3689560 kB' 'Active(anon): 7266676 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521224 kB' 'Mapped: 207832 kB' 'Shmem: 6748712 kB' 'KReclaimable: 543516 kB' 'Slab: 1188440 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 644924 kB' 'KernelStack: 22160 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8768936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:21.527 16:21:18 setup.sh.hugepages.custom_alloc -- 
00:04:21.528 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trace condensed: the same IFS=': ' read loop steps through every meminfo key from MemTotal through HugePages_Free, hitting "continue" on each until the key matches HugePages_Rsvd]
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:21.529 nr_hugepages=1536
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:21.529 resv_hugepages=0
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:21.529 surplus_hugepages=0
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:21.529 anon_hugepages=0
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.529 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42221960 kB' 'MemAvailable: 46205752 kB' 'Buffers: 6064 kB' 'Cached: 10830656 kB' 'SwapCached: 0 kB' 'Active: 7664600 kB' 'Inactive: 3689560 kB' 'Active(anon): 7266176 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520656 kB' 'Mapped: 207832 kB' 'Shmem: 6748736 kB' 'KReclaimable: 543516 kB' 'Slab: 1188440 kB' 'SReclaimable: 543516 kB' 'SUnreclaim: 644924 kB' 'KernelStack: 22144 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8768960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218828 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB'
00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [trace condensed: the IFS=': ' read loop begins stepping through the meminfo keys toward HugePages_Total; the log is cut off mid-scan]
setup/common.sh@31 -- # read -r var val _ 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.530 
16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.530 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.531 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # 
local var val 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26871532 kB' 'MemUsed: 5767608 kB' 'SwapCached: 0 kB' 'Active: 2896656 kB' 'Inactive: 231284 kB' 'Active(anon): 2763608 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2812916 kB' 'Mapped: 89216 kB' 'AnonPages: 318188 kB' 'Shmem: 2448584 kB' 'KernelStack: 11848 kB' 'PageTables: 5060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 215032 kB' 'Slab: 515516 kB' 'SReclaimable: 215032 kB' 'SUnreclaim: 300484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 
16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.532 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:21.533 16:21:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 15350608 kB' 'MemUsed: 12305472 kB' 'SwapCached: 0 kB' 'Active: 4767804 kB' 'Inactive: 3458276 kB' 'Active(anon): 4502428 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8023844 kB' 'Mapped: 118616 kB' 'AnonPages: 202276 kB' 'Shmem: 4300192 kB' 'KernelStack: 10280 kB' 'PageTables: 3560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 328484 kB' 'Slab: 672924 kB' 'SReclaimable: 328484 kB' 'SUnreclaim: 344440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.533 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 
16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:21.534 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:21.535 node0=512 expecting 512 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:21.535 node1=1024 expecting 1024 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:21.535 00:04:21.535 real 0m4.017s 00:04:21.535 user 0m1.364s 00:04:21.535 sys 0m2.673s 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.535 16:21:18 setup.sh.hugepages.custom_alloc -- 
common/autotest_common.sh@10 -- # set +x 00:04:21.535 ************************************ 00:04:21.535 END TEST custom_alloc 00:04:21.535 ************************************ 00:04:21.535 16:21:18 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:21.535 16:21:18 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.535 16:21:18 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.535 16:21:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:21.535 ************************************ 00:04:21.535 START TEST no_shrink_alloc 00:04:21.535 ************************************ 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 
00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.535 16:21:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:25.730 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:80:04.5 (8086 2021): Already using the 
vfio-pci driver 00:04:25.730 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:25.730 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ 
-e /sys/devices/system/node/node/meminfo ]] 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43269740 kB' 'MemAvailable: 47253468 kB' 'Buffers: 6064 kB' 'Cached: 10830884 kB' 'SwapCached: 0 kB' 'Active: 7665620 kB' 'Inactive: 3689560 kB' 'Active(anon): 7267196 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521372 kB' 'Mapped: 206868 kB' 'Shmem: 6748964 kB' 'KReclaimable: 543452 kB' 'Slab: 1188284 kB' 'SReclaimable: 543452 kB' 'SUnreclaim: 644832 kB' 'KernelStack: 22256 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8738640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219004 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.730 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical IFS=': ' / read -r var val _ / [[ <key> == AnonHugePages ]] / continue trace repeated for each remaining /proc/meminfo key ...] 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.731 16:21:22
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.731 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43269312 kB' 'MemAvailable: 47253008 kB' 'Buffers: 6064 kB' 'Cached: 10830888 kB' 'SwapCached: 0 kB' 'Active: 7665000 kB' 'Inactive: 3689560 kB' 'Active(anon): 7266576 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520752 kB' 'Mapped: 206848 kB' 'Shmem: 6748968 kB' 'KReclaimable: 543420 kB' 'Slab: 1188308 kB' 'SReclaimable: 543420 kB' 'SUnreclaim: 644888 kB' 'KernelStack: 22064 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8737040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219004 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:25.732 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.732 
16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.732 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.732 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.732 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r var val _ / [[ <key> == HugePages_Surp ]] / continue trace repeated for each remaining /proc/meminfo key ...] 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 --
# read -r var val _ 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.733 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43268940 kB' 'MemAvailable: 47252636 kB' 'Buffers: 6064 kB' 'Cached: 10830908 kB' 'SwapCached: 0 kB' 'Active: 7665060 kB' 'Inactive: 3689560 kB' 'Active(anon): 7266636 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520828 kB' 'Mapped: 206848 kB' 'Shmem: 6748988 kB' 'KReclaimable: 543420 kB' 'Slab: 1188436 kB' 'SReclaimable: 543420 kB' 'SUnreclaim: 645016 kB' 'KernelStack: 22208 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8738680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219020 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.734 16:21:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.734 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.735 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:25.736 16:21:22 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:25.736 nr_hugepages=1024 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:25.736 resv_hugepages=0 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:25.736 surplus_hugepages=0 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:25.736 anon_hugepages=0 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 16:21:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43267224 kB' 'MemAvailable: 47250920 kB' 'Buffers: 6064 kB' 'Cached: 10830924 kB' 'SwapCached: 0 kB' 'Active: 7665232 kB' 'Inactive: 3689560 kB' 'Active(anon): 7266808 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520956 kB' 'Mapped: 206848 kB' 'Shmem: 6749004 kB' 'KReclaimable: 543420 kB' 'Slab: 1188436 kB' 'SReclaimable: 543420 kB' 'SUnreclaim: 645016 kB' 'KernelStack: 22144 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8738700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218956 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:25.736 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.736 16:21:22 [... setup/common.sh@31-32 field scan condensed: IFS=': ' read -r var val _ over /proc/meminfo; fields MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted each fail [[ $var == HugePages_Total ]] and hit continue ...] 00:04:25.737 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.738 16:21:22
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.738 16:21:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25828016 kB' 'MemUsed: 6811124 kB' 'SwapCached: 0 kB' 'Active: 2899684 kB' 'Inactive: 231284 kB' 'Active(anon): 2766636 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2813068 kB' 'Mapped: 88500 kB' 'AnonPages: 321208 kB' 'Shmem: 2448736 kB' 'KernelStack: 12072 kB' 'PageTables: 5796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214936 kB' 'Slab: 515648 kB' 'SReclaimable: 214936 kB' 'SUnreclaim: 300712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.738 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.738 16:21:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... setup/common.sh@31-32 field scan of node0 meminfo condensed: MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted each fail [[ $var == HugePages_Surp ]] and hit continue ...] 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- #
read -r var val _ 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:25.739 node0=1024 expecting 1024 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.739 16:21:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:29.936 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:29.936 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:29.936 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@89 -- # local node 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.936 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43277148 kB' 'MemAvailable: 47260844 kB' 'Buffers: 6064 kB' 'Cached: 10831340 kB' 'SwapCached: 0 kB' 'Active: 7666104 kB' 'Inactive: 3689560 kB' 'Active(anon): 7267680 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521048 kB' 'Mapped: 206936 kB' 'Shmem: 6749420 kB' 'KReclaimable: 543420 kB' 'Slab: 1188852 kB' 'SReclaimable: 543420 kB' 'SUnreclaim: 645432 kB' 'KernelStack: 22080 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8736996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218844 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.937 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 
43278080 kB' 'MemAvailable: 47261776 kB' 'Buffers: 6064 kB' 'Cached: 10831344 kB' 'SwapCached: 0 kB' 'Active: 7665784 kB' 'Inactive: 3689560 kB' 'Active(anon): 7267360 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521204 kB' 'Mapped: 206840 kB' 'Shmem: 6749424 kB' 'KReclaimable: 543420 kB' 'Slab: 1188848 kB' 'SReclaimable: 543420 kB' 'SUnreclaim: 645428 kB' 'KernelStack: 22080 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8737016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218860 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.938 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.938 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.938-00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [xtrace elided: the `IFS=': '; read -r var val _` loop walks every /proc/meminfo key (MemAvailable, Buffers, Cached, ..., HugePages_Free, HugePages_Rsvd), hitting `continue` on each until HugePages_Surp matches]
00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.940 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.941 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43277892 kB' 'MemAvailable: 47261588 kB' 'Buffers: 6064 kB' 'Cached: 10831360 kB' 'SwapCached: 0 kB' 'Active: 7665776 kB' 'Inactive: 3689560 kB' 'Active(anon): 7267352 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521204 kB' 'Mapped: 206840 kB' 'Shmem: 6749440 kB' 'KReclaimable: 543420 kB' 'Slab: 1188848 kB' 'SReclaimable: 543420 kB' 'SUnreclaim: 645428 kB' 'KernelStack: 22080 kB' 'PageTables: 8612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8737036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218860 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:29.941 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.941-00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [xtrace elided: the same `read -r var val _` loop skips each /proc/meminfo key (MemFree, MemAvailable, ..., ShmemPmdMapped) while searching for HugePages_Rsvd; the trace is truncated here and continues past this excerpt]
setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:29.943 nr_hugepages=1024 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:29.943 resv_hugepages=0 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:29.943 surplus_hugepages=0 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:29.943 
anon_hugepages=0 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43278368 kB' 'MemAvailable: 47262064 kB' 'Buffers: 6064 kB' 'Cached: 10831384 kB' 'SwapCached: 0 kB' 'Active: 7665956 kB' 'Inactive: 3689560 kB' 'Active(anon): 7267532 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521380 kB' 'Mapped: 206840 kB' 'Shmem: 6749464 kB' 'KReclaimable: 543420 kB' 'Slab: 1188848 kB' 'SReclaimable: 543420 kB' 'SUnreclaim: 645428 kB' 'KernelStack: 22064 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8739772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218860 kB' 'VmallocChunk: 0 kB' 'Percpu: 109312 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3421556 kB' 'DirectMap2M: 19333120 kB' 'DirectMap1G: 46137344 kB' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.943 
16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.943 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 
16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.944 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.944 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.945 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25812180 kB' 'MemUsed: 6826960 kB' 'SwapCached: 0 kB' 'Active: 2900276 kB' 'Inactive: 231284 kB' 'Active(anon): 2767228 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2813420 kB' 'Mapped: 88508 kB' 'AnonPages: 321512 kB' 'Shmem: 2449088 kB' 'KernelStack: 11912 kB' 'PageTables: 4944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214936 kB' 'Slab: 516128 kB' 'SReclaimable: 214936 kB' 'SUnreclaim: 301192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.945 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:29.946 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.947 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.947 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.947 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.947 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:29.947 node0=1024 expecting 1024 00:04:29.947 16:21:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:29.947 00:04:29.947 real 0m8.525s 00:04:29.947 user 0m3.128s 00:04:29.947 sys 0m5.476s 00:04:29.947 16:21:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.947 16:21:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.947 ************************************ 00:04:29.947 END TEST no_shrink_alloc 00:04:29.947 ************************************ 00:04:30.205 16:21:26 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:30.205 16:21:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:30.205 16:21:26 setup.sh.hugepages -- 
setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:30.205 16:21:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.205 16:21:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.205 16:21:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.205 16:21:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.206 16:21:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:30.206 16:21:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.206 16:21:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.206 16:21:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.206 16:21:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.206 16:21:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:30.206 16:21:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:30.206 00:04:30.206 real 0m32.918s 00:04:30.206 user 0m11.297s 00:04:30.206 sys 0m20.106s 00:04:30.206 16:21:26 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.206 16:21:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:30.206 ************************************ 00:04:30.206 END TEST hugepages 00:04:30.206 ************************************ 00:04:30.206 16:21:26 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:30.206 16:21:26 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.206 16:21:26 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.206 16:21:26 setup.sh -- common/autotest_common.sh@10 
-- # set +x 00:04:30.206 ************************************ 00:04:30.206 START TEST driver 00:04:30.206 ************************************ 00:04:30.206 16:21:26 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:30.206 * Looking for test storage... 00:04:30.206 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:30.206 16:21:27 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:30.206 16:21:27 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.206 16:21:27 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.783 16:21:32 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:36.783 16:21:32 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.783 16:21:32 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.783 16:21:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:36.783 ************************************ 00:04:36.783 START TEST guess_driver 00:04:36.783 ************************************ 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e 
/sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:36.783 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:36.783 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:36.783 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:36.783 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:36.783 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:36.783 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:36.783 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:36.783 
16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:36.783 Looking for driver=vfio-pci 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.783 16:21:32 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.075 16:21:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.978 16:21:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:41.978 16:21:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:41.978 16:21:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.237 16:21:38 
setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:42.237 16:21:38 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:42.237 16:21:38 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.237 16:21:38 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.511 00:04:47.511 real 0m11.621s 00:04:47.511 user 0m3.075s 00:04:47.511 sys 0m5.910s 00:04:47.511 16:21:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.511 16:21:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:47.511 ************************************ 00:04:47.511 END TEST guess_driver 00:04:47.511 ************************************ 00:04:47.770 00:04:47.770 real 0m17.497s 00:04:47.770 user 0m4.800s 00:04:47.770 sys 0m9.239s 00:04:47.770 16:21:44 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.770 16:21:44 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:47.770 ************************************ 00:04:47.770 END TEST driver 00:04:47.770 ************************************ 00:04:47.770 16:21:44 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:47.770 16:21:44 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.770 16:21:44 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.770 16:21:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.770 ************************************ 00:04:47.770 START TEST devices 00:04:47.770 ************************************ 00:04:47.770 16:21:44 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:47.770 * Looking for test storage... 
00:04:47.770 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:47.770 16:21:44 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:47.770 16:21:44 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:47.770 16:21:44 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.770 16:21:44 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.046 16:21:49 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:53.046 16:21:49 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:53.046 16:21:49 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:53.046 16:21:49 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:53.046 16:21:49 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:53.046 16:21:49 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:53.046 16:21:49 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:53.046 16:21:49 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:53.046 16:21:49 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:53.046 16:21:49 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:53.046 16:21:49 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:53.046 16:21:49 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:53.046 16:21:49 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:53.046 16:21:49 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:53.046 16:21:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:53.046 16:21:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:04:53.046 16:21:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:53.046 16:21:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:53.046 16:21:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:53.047 16:21:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:53.047 16:21:49 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:53.047 16:21:49 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:53.047 No valid GPT data, bailing 00:04:53.047 16:21:49 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:53.047 16:21:49 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:53.047 16:21:49 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:53.047 16:21:49 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:53.047 16:21:49 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:53.047 16:21:49 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:53.047 16:21:49 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:53.047 16:21:49 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:53.047 16:21:49 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:53.047 16:21:49 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:53.047 16:21:49 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:53.047 16:21:49 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:53.047 16:21:49 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:53.047 16:21:49 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.047 16:21:49 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.047 16:21:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:53.047 ************************************ 00:04:53.047 START TEST nvme_mount 00:04:53.047 ************************************ 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:53.047 16:21:49 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:53.047 16:21:49 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:53.615 Creating new GPT entries in memory. 00:04:53.615 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:53.615 other utilities. 00:04:53.615 16:21:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:53.615 16:21:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.615 16:21:50 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:53.615 16:21:50 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:53.615 16:21:50 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:54.552 Creating new GPT entries in memory. 00:04:54.552 The operation has completed successfully. 
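The partitioning step above runs `sgdisk` under `flock /dev/nvme0n1`, taking an exclusive lock on the device node so concurrent jobs cannot repartition the same disk at once. A sketch of that serialization pattern, using a temp file in place of the real device node:

```shell
#!/usr/bin/env bash
# flock(1) serialization as used for "flock /dev/nvme0n1 sgdisk ...".
# A temp file stands in for the block device here.
lockfile=$(mktemp)
# flock holds an exclusive lock on $lockfile for the duration of the
# command; other holders of the same lock block until it exits.
out=$(flock "$lockfile" echo "partitioning serialized")
rm -f "$lockfile"
echo "$out"
```

Locking the device node itself (rather than a separate lock file) means any cooperating tool that flocks the same node is serialized too.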
00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1487267 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:54.552 
16:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.552 16:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:54.811 16:21:51 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:54.811 16:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.811 16:21:51 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
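The backslash-heavy comparisons in the trace (`\0\0\0\0\:\d\8\:\0\0\.\0`) are just how xtrace renders a quoted right-hand side inside `[[ ]]`: every character is escaped so the value matches literally instead of being interpreted as a glob. The `*\A\c\t\i\v\e...*` form, by contrast, is an unquoted pattern with leading/trailing wildcards. A sketch of the two cases:

```shell
#!/usr/bin/env bash
# Why xtrace prints "\0\0\0\0\:\d\8...": inside [[ ]], a quoted RHS
# matches literally; an unquoted RHS is a glob pattern.
dev="0000:d8:00.0"
status="Active devices: data@nvme0n1, so not binding PCI dev"

[[ "0000:d8:00.0" == "$dev" ]] && echo "literal match"
# Unquoted * on either side acts as a wildcard, i.e. a substring test.
[[ $status == *"Active devices:"* ]] && echo "substring match"
```

This is why the PCI walk compares each enumerated address against the allowlisted `0000:d8:00.0` exactly, while the `Active devices:` check only needs a substring hit.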
00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:59.004 16:21:55 
setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:59.004 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:59.004 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:59.004 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:59.004 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:59.004 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.004 16:21:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:02.345 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:05:02.345 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.345 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.345 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.345 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.345 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.345 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.345 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.345 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.345 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.345 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.346 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # 
setup output config 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.604 16:21:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:06.794 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:06.794 00:05:06.794 real 0m14.265s 00:05:06.794 user 0m4.071s 00:05:06.794 sys 0m8.082s 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.794 16:22:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:06.794 ************************************ 00:05:06.794 END TEST nvme_mount 00:05:06.794 ************************************ 00:05:06.794 16:22:03 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:06.794 16:22:03 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.794 16:22:03 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.794 16:22:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:06.794 ************************************ 00:05:06.794 START TEST dm_mount 00:05:06.794 ************************************ 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:06.794 16:22:03 
setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:06.794 16:22:03 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:08.173 Creating new GPT entries in memory. 00:05:08.173 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:08.173 other utilities. 00:05:08.173 16:22:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:08.173 16:22:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.173 16:22:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:08.173 16:22:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:08.173 16:22:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:09.108 Creating new GPT entries in memory. 00:05:09.108 The operation has completed successfully. 00:05:09.108 16:22:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:09.108 16:22:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:09.108 16:22:05 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:09.108 16:22:05 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:09.108 16:22:05 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:10.044 The operation has completed successfully. 
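The `--new=1:2048:2099199` and `--new=2:2099200:4196351` arguments above follow directly from the loop in `setup/common.sh`: the 1 GiB request is converted to 512-byte sectors (`size /= 512`), the first partition is seeded at LBA 2048, and each subsequent partition starts one sector past the previous end. A self-contained reproduction of that arithmetic:

```shell
#!/usr/bin/env bash
# Sector layout for the two 1 GiB partitions created for the dm test:
# --new=1:2048:2099199 followed by --new=2:2099200:4196351.
size=$(( 1073741824 / 512 ))   # 1 GiB expressed in 512-byte sectors
part_start=0 part_end=0
for part in 1 2; do
    # First partition starts at the conventional 2048 alignment;
    # later ones start one sector past the previous partition's end.
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    echo "--new=$part:$part_start:$part_end"
done
```

The nvme_mount test earlier in the log is the single-partition case of the same loop, which is why it emits only `--new=1:2048:2099199`.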
00:05:10.044 16:22:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:10.044 16:22:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.044 16:22:06 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1492432 00:05:10.044 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:10.044 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:10.044 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:10.044 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local 
dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.045 
16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.045 16:22:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:14.232 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.232 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.232 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.232 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.232 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.232 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.232 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.232 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.232 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.233 
16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.233 
16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
test_file= 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.233 16:22:10 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.424 16:22:14 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.425 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.425 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.425 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.425 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:18.425 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:18.425 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:18.425 16:22:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.425 16:22:15 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:18.425 16:22:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:18.425 16:22:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:18.425 16:22:15 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:18.425 16:22:15 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:18.425 16:22:15 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:18.425 16:22:15 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:18.425 16:22:15 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:18.425 16:22:15 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:18.425 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 
(ext4): 53 ef 00:05:18.425 16:22:15 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:18.425 16:22:15 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:18.425 00:05:18.425 real 0m11.567s 00:05:18.425 user 0m3.057s 00:05:18.425 sys 0m5.630s 00:05:18.425 16:22:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.425 16:22:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:18.425 ************************************ 00:05:18.425 END TEST dm_mount 00:05:18.425 ************************************ 00:05:18.425 16:22:15 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:18.425 16:22:15 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:18.425 16:22:15 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.425 16:22:15 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:18.425 16:22:15 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:18.425 16:22:15 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:18.425 16:22:15 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:18.684 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:18.684 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:18.684 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:18.684 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:18.684 16:22:15 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:18.684 16:22:15 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:18.684 16:22:15 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 
00:05:18.684 16:22:15 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:18.684 16:22:15 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:18.684 16:22:15 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:18.684 16:22:15 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:18.684 00:05:18.684 real 0m31.036s 00:05:18.684 user 0m8.902s 00:05:18.684 sys 0m17.063s 00:05:18.684 16:22:15 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.684 16:22:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:18.684 ************************************ 00:05:18.684 END TEST devices 00:05:18.684 ************************************ 00:05:18.684 00:05:18.684 real 1m51.555s 00:05:18.684 user 0m34.357s 00:05:18.684 sys 1m5.032s 00:05:18.684 16:22:15 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.684 16:22:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:18.684 ************************************ 00:05:18.684 END TEST setup.sh 00:05:18.684 ************************************ 00:05:18.943 16:22:15 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:05:23.179 Hugepages 00:05:23.179 node hugesize free / total 00:05:23.179 node0 1048576kB 0 / 0 00:05:23.179 node0 2048kB 1024 / 1024 00:05:23.179 node1 1048576kB 0 / 0 00:05:23.179 node1 2048kB 1024 / 1024 00:05:23.179 00:05:23.179 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:23.179 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:23.180 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:23.180 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:23.180 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:23.180 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:23.180 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:23.180 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:23.180 I/OAT 0000:00:04.7 8086 2021 
0 ioatdma - - 00:05:23.180 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:23.180 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:23.180 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:23.180 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:23.180 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:23.180 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:23.180 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:23.180 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:23.180 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:23.180 16:22:19 -- spdk/autotest.sh@130 -- # uname -s 00:05:23.180 16:22:19 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:23.180 16:22:19 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:23.180 16:22:19 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:27.383 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:27.383 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:29.290 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:29.548 16:22:26 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:30.485 16:22:27 -- common/autotest_common.sh@1533 -- # bdfs=() 
00:05:30.485 16:22:27 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:30.485 16:22:27 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:30.485 16:22:27 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:30.485 16:22:27 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:30.485 16:22:27 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:30.485 16:22:27 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:30.485 16:22:27 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:30.485 16:22:27 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:30.485 16:22:27 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:30.485 16:22:27 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:30.485 16:22:27 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:34.675 Waiting for block devices as requested 00:05:34.675 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:34.675 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:34.675 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:34.675 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:34.675 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:34.675 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:34.675 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:34.675 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:34.675 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:34.934 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:34.934 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:34.934 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:35.194 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:35.194 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:35.194 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:35.451 0000:80:04.0 (8086 
2021): vfio-pci -> ioatdma 00:05:35.451 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:35.710 16:22:32 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:35.710 16:22:32 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:35.710 16:22:32 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:35.710 16:22:32 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:05:35.710 16:22:32 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:35.710 16:22:32 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:35.710 16:22:32 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:35.710 16:22:32 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:35.710 16:22:32 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:35.710 16:22:32 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:35.710 16:22:32 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:35.710 16:22:32 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:35.710 16:22:32 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:35.710 16:22:32 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:35.710 16:22:32 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:35.710 16:22:32 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:35.710 16:22:32 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:35.710 16:22:32 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:35.710 16:22:32 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:35.710 16:22:32 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:35.710 16:22:32 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:35.710 16:22:32 -- common/autotest_common.sh@1557 -- # continue 
00:05:35.710 16:22:32 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:35.710 16:22:32 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:35.710 16:22:32 -- common/autotest_common.sh@10 -- # set +x 00:05:35.710 16:22:32 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:35.710 16:22:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:35.710 16:22:32 -- common/autotest_common.sh@10 -- # set +x 00:05:35.710 16:22:32 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:39.906 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:39.906 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:39.906 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:39.906 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:39.906 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:39.906 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:39.906 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:39.906 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:39.906 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:39.906 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:39.906 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:39.906 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:39.906 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:39.906 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:39.906 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:40.165 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:42.070 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:42.070 16:22:38 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:42.070 16:22:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.070 16:22:38 -- common/autotest_common.sh@10 -- # set +x 00:05:42.071 16:22:38 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:42.071 16:22:38 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:42.071 16:22:38 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 
00:05:42.071 16:22:38 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:42.071 16:22:38 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:42.071 16:22:38 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:42.071 16:22:38 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:42.071 16:22:38 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:42.071 16:22:38 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:42.071 16:22:38 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:42.071 16:22:38 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:42.071 16:22:38 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:42.071 16:22:38 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:42.071 16:22:38 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:42.071 16:22:38 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:42.071 16:22:38 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:42.071 16:22:38 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:42.071 16:22:38 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:42.071 16:22:38 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:05:42.071 16:22:38 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:05:42.071 16:22:38 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1503709 00:05:42.071 16:22:38 -- common/autotest_common.sh@1598 -- # waitforlisten 1503709 00:05:42.071 16:22:38 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.071 16:22:38 -- common/autotest_common.sh@831 -- # '[' -z 1503709 ']' 00:05:42.071 16:22:38 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.071 16:22:38 -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:05:42.071 16:22:38 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.071 16:22:38 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:42.071 16:22:38 -- common/autotest_common.sh@10 -- # set +x 00:05:42.329 [2024-07-24 16:22:38.996244] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:05:42.329 [2024-07-24 16:22:38.996366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503709 ] 00:05:42.588 [2024-07-24 16:22:39.199978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.847 [2024-07-24 16:22:39.487303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.254 16:22:40 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.254 16:22:40 -- common/autotest_common.sh@864 -- # return 0 00:05:44.254 16:22:40 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:44.254 16:22:40 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:44.254 16:22:40 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:47.542 nvme0n1 00:05:47.542 16:22:43 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:47.542 [2024-07-24 16:22:44.083036] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:47.542 request: 00:05:47.542 { 00:05:47.542 "nvme_ctrlr_name": "nvme0", 00:05:47.542 "password": "test", 00:05:47.542 
"method": "bdev_nvme_opal_revert", 00:05:47.542 "req_id": 1 00:05:47.542 } 00:05:47.542 Got JSON-RPC error response 00:05:47.542 response: 00:05:47.542 { 00:05:47.542 "code": -32602, 00:05:47.542 "message": "Invalid parameters" 00:05:47.542 } 00:05:47.542 16:22:44 -- common/autotest_common.sh@1604 -- # true 00:05:47.542 16:22:44 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:47.542 16:22:44 -- common/autotest_common.sh@1608 -- # killprocess 1503709 00:05:47.542 16:22:44 -- common/autotest_common.sh@950 -- # '[' -z 1503709 ']' 00:05:47.542 16:22:44 -- common/autotest_common.sh@954 -- # kill -0 1503709 00:05:47.542 16:22:44 -- common/autotest_common.sh@955 -- # uname 00:05:47.542 16:22:44 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.542 16:22:44 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1503709 00:05:47.542 16:22:44 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.542 16:22:44 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.542 16:22:44 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1503709' 00:05:47.542 killing process with pid 1503709 00:05:47.542 16:22:44 -- common/autotest_common.sh@969 -- # kill 1503709 00:05:47.542 16:22:44 -- common/autotest_common.sh@974 -- # wait 1503709 00:05:52.815 16:22:49 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:52.815 16:22:49 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:52.815 16:22:49 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:05:52.815 16:22:49 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:05:52.815 16:22:49 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:05:53.074 Restarting all devices. 
00:05:59.704 lstat() error: No such file or directory 00:05:59.704 QAT Error: No GENERAL section found 00:05:59.704 Failed to configure qat_dev0 00:05:59.704 lstat() error: No such file or directory 00:05:59.704 QAT Error: No GENERAL section found 00:05:59.704 Failed to configure qat_dev1 00:05:59.704 lstat() error: No such file or directory 00:05:59.704 QAT Error: No GENERAL section found 00:05:59.704 Failed to configure qat_dev2 00:05:59.704 lstat() error: No such file or directory 00:05:59.704 QAT Error: No GENERAL section found 00:05:59.704 Failed to configure qat_dev3 00:05:59.704 lstat() error: No such file or directory 00:05:59.704 QAT Error: No GENERAL section found 00:05:59.704 Failed to configure qat_dev4 00:05:59.704 enable sriov 00:05:59.704 Checking status of all devices. 00:05:59.704 There is 5 QAT acceleration device(s) in the system: 00:05:59.704 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:1a:00.0, #accel: 5 #engines: 10 state: down 00:05:59.704 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:1c:00.0, #accel: 5 #engines: 10 state: down 00:05:59.704 qat_dev2 - type: c6xx, inst_id: 2, node_id: 0, bsf: 0000:1e:00.0, #accel: 5 #engines: 10 state: down 00:05:59.704 qat_dev3 - type: c6xx, inst_id: 3, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down 00:05:59.704 qat_dev4 - type: c6xx, inst_id: 4, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down 00:05:59.704 0000:1a:00.0 set to 16 VFs 00:06:00.641 0000:1c:00.0 set to 16 VFs 00:06:01.222 0000:1e:00.0 set to 16 VFs 00:06:02.160 0000:3d:00.0 set to 16 VFs 00:06:02.727 0000:3f:00.0 set to 16 VFs 00:06:05.260 Properly configured the qat device with driver uio_pci_generic. 
00:06:05.260 16:23:01 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:05.260 16:23:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:05.260 16:23:01 -- common/autotest_common.sh@10 -- # set +x 00:06:05.260 16:23:01 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:05.260 16:23:01 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:06:05.260 16:23:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.260 16:23:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.260 16:23:01 -- common/autotest_common.sh@10 -- # set +x 00:06:05.260 ************************************ 00:06:05.260 START TEST env 00:06:05.260 ************************************ 00:06:05.260 16:23:02 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:06:05.260 * Looking for test storage... 00:06:05.260 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env 00:06:05.260 16:23:02 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:06:05.260 16:23:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.260 16:23:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.260 16:23:02 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.519 ************************************ 00:06:05.519 START TEST env_memory 00:06:05.519 ************************************ 00:06:05.519 16:23:02 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:06:05.519 00:06:05.519 00:06:05.519 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.519 http://cunit.sourceforge.net/ 00:06:05.519 00:06:05.519 00:06:05.519 Suite: memory 00:06:05.519 Test: alloc and free memory map ...[2024-07-24 16:23:02.215651] 
/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:05.519 passed 00:06:05.519 Test: mem map translation ...[2024-07-24 16:23:02.270280] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:05.519 [2024-07-24 16:23:02.270320] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:05.519 [2024-07-24 16:23:02.270403] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:05.519 [2024-07-24 16:23:02.270426] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:05.519 passed 00:06:05.519 Test: mem map registration ...[2024-07-24 16:23:02.356268] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:05.519 [2024-07-24 16:23:02.356306] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:05.778 passed 00:06:05.778 Test: mem map adjacent registrations ...passed 00:06:05.778 00:06:05.778 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.778 suites 1 1 n/a 0 0 00:06:05.778 tests 4 4 4 0 0 00:06:05.778 asserts 152 152 152 0 n/a 00:06:05.778 00:06:05.778 Elapsed time = 0.300 seconds 00:06:05.778 00:06:05.778 real 0m0.342s 00:06:05.778 user 0m0.309s 00:06:05.778 sys 0m0.032s 00:06:05.778 16:23:02 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:06:05.778 16:23:02 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:05.778 ************************************ 00:06:05.778 END TEST env_memory 00:06:05.778 ************************************ 00:06:05.778 16:23:02 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:05.778 16:23:02 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.778 16:23:02 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.778 16:23:02 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.778 ************************************ 00:06:05.778 START TEST env_vtophys 00:06:05.778 ************************************ 00:06:05.778 16:23:02 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:06.038 EAL: lib.eal log level changed from notice to debug 00:06:06.038 EAL: Detected lcore 0 as core 0 on socket 0 00:06:06.038 EAL: Detected lcore 1 as core 1 on socket 0 00:06:06.038 EAL: Detected lcore 2 as core 2 on socket 0 00:06:06.039 EAL: Detected lcore 3 as core 3 on socket 0 00:06:06.039 EAL: Detected lcore 4 as core 4 on socket 0 00:06:06.039 EAL: Detected lcore 5 as core 5 on socket 0 00:06:06.039 EAL: Detected lcore 6 as core 6 on socket 0 00:06:06.039 EAL: Detected lcore 7 as core 8 on socket 0 00:06:06.039 EAL: Detected lcore 8 as core 9 on socket 0 00:06:06.039 EAL: Detected lcore 9 as core 10 on socket 0 00:06:06.039 EAL: Detected lcore 10 as core 11 on socket 0 00:06:06.039 EAL: Detected lcore 11 as core 12 on socket 0 00:06:06.039 EAL: Detected lcore 12 as core 13 on socket 0 00:06:06.039 EAL: Detected lcore 13 as core 14 on socket 0 00:06:06.039 EAL: Detected lcore 14 as core 16 on socket 0 00:06:06.039 EAL: Detected lcore 15 as core 17 on socket 0 00:06:06.039 EAL: Detected lcore 16 as core 18 on socket 0 00:06:06.039 EAL: Detected lcore 17 as core 19 on socket 0 00:06:06.039 EAL: 
Detected lcore 18 as core 20 on socket 0 00:06:06.039 EAL: Detected lcore 19 as core 21 on socket 0 00:06:06.039 EAL: Detected lcore 20 as core 22 on socket 0 00:06:06.039 EAL: Detected lcore 21 as core 24 on socket 0 00:06:06.039 EAL: Detected lcore 22 as core 25 on socket 0 00:06:06.039 EAL: Detected lcore 23 as core 26 on socket 0 00:06:06.039 EAL: Detected lcore 24 as core 27 on socket 0 00:06:06.039 EAL: Detected lcore 25 as core 28 on socket 0 00:06:06.039 EAL: Detected lcore 26 as core 29 on socket 0 00:06:06.039 EAL: Detected lcore 27 as core 30 on socket 0 00:06:06.039 EAL: Detected lcore 28 as core 0 on socket 1 00:06:06.039 EAL: Detected lcore 29 as core 1 on socket 1 00:06:06.039 EAL: Detected lcore 30 as core 2 on socket 1 00:06:06.039 EAL: Detected lcore 31 as core 3 on socket 1 00:06:06.039 EAL: Detected lcore 32 as core 4 on socket 1 00:06:06.039 EAL: Detected lcore 33 as core 5 on socket 1 00:06:06.039 EAL: Detected lcore 34 as core 6 on socket 1 00:06:06.039 EAL: Detected lcore 35 as core 8 on socket 1 00:06:06.039 EAL: Detected lcore 36 as core 9 on socket 1 00:06:06.039 EAL: Detected lcore 37 as core 10 on socket 1 00:06:06.039 EAL: Detected lcore 38 as core 11 on socket 1 00:06:06.039 EAL: Detected lcore 39 as core 12 on socket 1 00:06:06.039 EAL: Detected lcore 40 as core 13 on socket 1 00:06:06.039 EAL: Detected lcore 41 as core 14 on socket 1 00:06:06.039 EAL: Detected lcore 42 as core 16 on socket 1 00:06:06.039 EAL: Detected lcore 43 as core 17 on socket 1 00:06:06.039 EAL: Detected lcore 44 as core 18 on socket 1 00:06:06.039 EAL: Detected lcore 45 as core 19 on socket 1 00:06:06.039 EAL: Detected lcore 46 as core 20 on socket 1 00:06:06.039 EAL: Detected lcore 47 as core 21 on socket 1 00:06:06.039 EAL: Detected lcore 48 as core 22 on socket 1 00:06:06.039 EAL: Detected lcore 49 as core 24 on socket 1 00:06:06.039 EAL: Detected lcore 50 as core 25 on socket 1 00:06:06.039 EAL: Detected lcore 51 as core 26 on socket 1 00:06:06.039 EAL: 
Detected lcore 52 as core 27 on socket 1 00:06:06.039 EAL: Detected lcore 53 as core 28 on socket 1 00:06:06.039 EAL: Detected lcore 54 as core 29 on socket 1 00:06:06.039 EAL: Detected lcore 55 as core 30 on socket 1 00:06:06.039 EAL: Detected lcore 56 as core 0 on socket 0 00:06:06.039 EAL: Detected lcore 57 as core 1 on socket 0 00:06:06.039 EAL: Detected lcore 58 as core 2 on socket 0 00:06:06.039 EAL: Detected lcore 59 as core 3 on socket 0 00:06:06.039 EAL: Detected lcore 60 as core 4 on socket 0 00:06:06.039 EAL: Detected lcore 61 as core 5 on socket 0 00:06:06.039 EAL: Detected lcore 62 as core 6 on socket 0 00:06:06.039 EAL: Detected lcore 63 as core 8 on socket 0 00:06:06.039 EAL: Detected lcore 64 as core 9 on socket 0 00:06:06.039 EAL: Detected lcore 65 as core 10 on socket 0 00:06:06.039 EAL: Detected lcore 66 as core 11 on socket 0 00:06:06.039 EAL: Detected lcore 67 as core 12 on socket 0 00:06:06.039 EAL: Detected lcore 68 as core 13 on socket 0 00:06:06.039 EAL: Detected lcore 69 as core 14 on socket 0 00:06:06.039 EAL: Detected lcore 70 as core 16 on socket 0 00:06:06.039 EAL: Detected lcore 71 as core 17 on socket 0 00:06:06.039 EAL: Detected lcore 72 as core 18 on socket 0 00:06:06.039 EAL: Detected lcore 73 as core 19 on socket 0 00:06:06.039 EAL: Detected lcore 74 as core 20 on socket 0 00:06:06.039 EAL: Detected lcore 75 as core 21 on socket 0 00:06:06.039 EAL: Detected lcore 76 as core 22 on socket 0 00:06:06.039 EAL: Detected lcore 77 as core 24 on socket 0 00:06:06.039 EAL: Detected lcore 78 as core 25 on socket 0 00:06:06.039 EAL: Detected lcore 79 as core 26 on socket 0 00:06:06.039 EAL: Detected lcore 80 as core 27 on socket 0 00:06:06.039 EAL: Detected lcore 81 as core 28 on socket 0 00:06:06.039 EAL: Detected lcore 82 as core 29 on socket 0 00:06:06.039 EAL: Detected lcore 83 as core 30 on socket 0 00:06:06.039 EAL: Detected lcore 84 as core 0 on socket 1 00:06:06.039 EAL: Detected lcore 85 as core 1 on socket 1 00:06:06.039 EAL: 
Detected lcore 86 as core 2 on socket 1 00:06:06.039 EAL: Detected lcore 87 as core 3 on socket 1 00:06:06.039 EAL: Detected lcore 88 as core 4 on socket 1 00:06:06.039 EAL: Detected lcore 89 as core 5 on socket 1 00:06:06.039 EAL: Detected lcore 90 as core 6 on socket 1 00:06:06.039 EAL: Detected lcore 91 as core 8 on socket 1 00:06:06.039 EAL: Detected lcore 92 as core 9 on socket 1 00:06:06.039 EAL: Detected lcore 93 as core 10 on socket 1 00:06:06.039 EAL: Detected lcore 94 as core 11 on socket 1 00:06:06.039 EAL: Detected lcore 95 as core 12 on socket 1 00:06:06.039 EAL: Detected lcore 96 as core 13 on socket 1 00:06:06.039 EAL: Detected lcore 97 as core 14 on socket 1 00:06:06.039 EAL: Detected lcore 98 as core 16 on socket 1 00:06:06.039 EAL: Detected lcore 99 as core 17 on socket 1 00:06:06.039 EAL: Detected lcore 100 as core 18 on socket 1 00:06:06.039 EAL: Detected lcore 101 as core 19 on socket 1 00:06:06.039 EAL: Detected lcore 102 as core 20 on socket 1 00:06:06.039 EAL: Detected lcore 103 as core 21 on socket 1 00:06:06.039 EAL: Detected lcore 104 as core 22 on socket 1 00:06:06.039 EAL: Detected lcore 105 as core 24 on socket 1 00:06:06.039 EAL: Detected lcore 106 as core 25 on socket 1 00:06:06.039 EAL: Detected lcore 107 as core 26 on socket 1 00:06:06.039 EAL: Detected lcore 108 as core 27 on socket 1 00:06:06.039 EAL: Detected lcore 109 as core 28 on socket 1 00:06:06.039 EAL: Detected lcore 110 as core 29 on socket 1 00:06:06.039 EAL: Detected lcore 111 as core 30 on socket 1 00:06:06.039 EAL: Maximum logical cores by configuration: 128 00:06:06.039 EAL: Detected CPU lcores: 112 00:06:06.039 EAL: Detected NUMA nodes: 2 00:06:06.039 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:06.039 EAL: Detected shared linkage of DPDK 00:06:06.039 EAL: No shared files mode enabled, IPC will be disabled 00:06:06.302 EAL: No shared files mode enabled, IPC is disabled 00:06:06.302 EAL: PCI driver qat for device 0000:1a:01.0 wants IOVA as 'PA' 
00:06:06.302 EAL: PCI driver qat for device 0000:1a:01.1 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1a:01.2 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1a:01.3 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1a:01.4 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1a:01.5 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1a:01.6 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1a:01.7 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1a:02.0 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1a:02.1 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1a:02.2 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1a:02.3 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1a:02.4 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1a:02.5 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1a:02.6 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1a:02.7 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:01.0 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:01.1 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:01.2 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:01.3 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:01.4 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:01.5 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:01.6 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:01.7 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:02.0 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:02.1 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:02.2 wants IOVA as 'PA' 00:06:06.302 EAL: PCI 
driver qat for device 0000:1c:02.3 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:02.4 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:02.5 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:02.6 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1c:02.7 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:01.0 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:01.1 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:01.2 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:01.3 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:01.4 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:01.5 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:01.6 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:01.7 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:02.0 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:02.1 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:02.2 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:02.3 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:02.4 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:02.5 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:02.6 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:1e:02.7 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 
0000:3d:01.5 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:06:06.302 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA 
as 'PA' 00:06:06.302 EAL: Bus pci wants IOVA as 'PA' 00:06:06.302 EAL: Bus auxiliary wants IOVA as 'DC' 00:06:06.302 EAL: Bus vdev wants IOVA as 'DC' 00:06:06.302 EAL: Selected IOVA mode 'PA' 00:06:06.302 EAL: Probing VFIO support... 00:06:06.302 EAL: IOMMU type 1 (Type 1) is supported 00:06:06.302 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:06.303 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:06.303 EAL: VFIO support initialized 00:06:06.303 EAL: Ask a virtual area of 0x2e000 bytes 00:06:06.303 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:06.303 EAL: Setting up physically contiguous memory... 00:06:06.303 EAL: Setting maximum number of open files to 524288 00:06:06.303 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:06.303 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:06.303 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:06.303 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.303 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:06.303 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.303 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.303 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:06.303 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:06.303 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.303 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:06.303 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.303 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.303 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:06.303 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:06.303 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.303 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:06.303 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.303 EAL: Ask a virtual area of 0x400000000 
bytes 00:06:06.303 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:06.303 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:06.303 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.303 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:06.303 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:06.303 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.303 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:06.303 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:06.303 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:06.303 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.303 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:06.303 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:06.303 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.303 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:06.303 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:06.303 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.303 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:06.303 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:06.303 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.303 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:06.303 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:06.303 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.303 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:06.303 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:06.303 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.303 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:06.303 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:06.303 EAL: Ask a virtual area of 0x61000 bytes 00:06:06.303 EAL: Virtual area found at 
0x201c00e00000 (size = 0x61000) 00:06:06.303 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:06.303 EAL: Ask a virtual area of 0x400000000 bytes 00:06:06.303 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:06.303 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:06.303 EAL: Hugepages will be freed exactly as allocated. 00:06:06.303 EAL: No shared files mode enabled, IPC is disabled 00:06:06.303 EAL: No shared files mode enabled, IPC is disabled 00:06:06.303 EAL: TSC frequency is ~2500000 KHz 00:06:06.303 EAL: Main lcore 0 is ready (tid=7f54cd541b40;cpuset=[0]) 00:06:06.303 EAL: Trying to obtain current memory policy. 00:06:06.303 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.303 EAL: Restoring previous memory policy: 0 00:06:06.303 EAL: request: mp_malloc_sync 00:06:06.303 EAL: No shared files mode enabled, IPC is disabled 00:06:06.303 EAL: Heap on socket 0 was expanded by 2MB 00:06:06.303 EAL: PCI device 0000:1a:01.0 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x202001000000 00:06:06.303 EAL: PCI memory mapped at 0x202001001000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:01.1 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x202001002000 00:06:06.303 EAL: PCI memory mapped at 0x202001003000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:01.2 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x202001004000 00:06:06.303 EAL: PCI memory mapped at 0x202001005000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:01.3 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 
0x202001006000 00:06:06.303 EAL: PCI memory mapped at 0x202001007000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:01.4 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x202001008000 00:06:06.303 EAL: PCI memory mapped at 0x202001009000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:01.5 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x20200100a000 00:06:06.303 EAL: PCI memory mapped at 0x20200100b000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:01.6 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x20200100c000 00:06:06.303 EAL: PCI memory mapped at 0x20200100d000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:01.7 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x20200100e000 00:06:06.303 EAL: PCI memory mapped at 0x20200100f000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:02.0 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x202001010000 00:06:06.303 EAL: PCI memory mapped at 0x202001011000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:02.1 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x202001012000 00:06:06.303 EAL: PCI memory mapped at 0x202001013000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:02.2 on NUMA socket 0 
00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x202001014000 00:06:06.303 EAL: PCI memory mapped at 0x202001015000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:02.3 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x202001016000 00:06:06.303 EAL: PCI memory mapped at 0x202001017000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:02.4 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x202001018000 00:06:06.303 EAL: PCI memory mapped at 0x202001019000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:02.5 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x20200101a000 00:06:06.303 EAL: PCI memory mapped at 0x20200101b000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:02.6 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x20200101c000 00:06:06.303 EAL: PCI memory mapped at 0x20200101d000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:06.303 EAL: PCI device 0000:1a:02.7 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x20200101e000 00:06:06.303 EAL: PCI memory mapped at 0x20200101f000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:06.303 EAL: PCI device 0000:1c:01.0 on NUMA socket 0 00:06:06.303 EAL: probe driver: 8086:37c9 qat 00:06:06.303 EAL: PCI memory mapped at 0x202001020000 00:06:06.303 EAL: PCI memory mapped at 0x202001021000 00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:1c:01.0 (socket 0)
00:06:06.303 EAL: PCI device 0000:1c:01.1 on NUMA socket 0
00:06:06.303 EAL: probe driver: 8086:37c9 qat
00:06:06.303 EAL: PCI memory mapped at 0x202001022000
00:06:06.303 EAL: PCI memory mapped at 0x202001023000
00:06:06.303 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0)
[the same probe/map sequence succeeds for each remaining QAT virtual function 0000:1c:01.2 through 0000:1c:02.7 and 0000:1e:01.0 through 0000:1e:02.7, mapping BAR pairs from 0x202001024000 up to 0x20200105f000, all between 00:06:06.303 and 00:06:06.304]
00:06:06.304 EAL: PCI device 0000:3d:01.0 on NUMA socket 0
00:06:06.304 EAL: probe driver: 8086:37c9 qat
00:06:06.304 EAL: PCI memory mapped at 0x202001060000
00:06:06.304 EAL: PCI memory mapped at 0x202001061000
00:06:06.304 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0)
00:06:06.304 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:06.304 EAL: PCI memory unmapped at 0x202001060000
00:06:06.304 EAL: PCI memory unmapped at 0x202001061000
00:06:06.304 EAL: Requested device 0000:3d:01.0 cannot be used
[each remaining virtual function 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7 fails identically: the device is probed and its BARs mapped, qat_pci_device_allocate() reports the maximum number of QAT devices has been reached, the BARs are unmapped, and the device is rejected as unusable, finishing with 0000:3f:02.7 at 00:06:06.306]
00:06:06.306 EAL: No shared files mode enabled, IPC is disabled
00:06:06.306 EAL: No shared files mode enabled, IPC is disabled
00:06:06.306 EAL: No PCI address specified using 'addr=' in: bus=pci
00:06:06.306 EAL: Mem event callback 'spdk:(nil)' registered
00:06:06.306
00:06:06.306
00:06:06.306 CUnit - A unit testing framework for C - Version 2.1-3
00:06:06.306 http://cunit.sourceforge.net/
00:06:06.306
00:06:06.306
00:06:06.306 Suite: components_suite
00:06:06.875 Test: vtophys_malloc_test ...passed
00:06:06.875 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:06.875 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:06.875 EAL: Restoring previous memory policy: 4
00:06:06.875 EAL: Calling mem event callback 'spdk:(nil)'
00:06:06.875 EAL: request: mp_malloc_sync
00:06:06.875 EAL: No shared files mode enabled, IPC is disabled
00:06:06.875 EAL: Heap on socket 0 was expanded by 4MB
00:06:06.875 EAL: Calling mem event callback 'spdk:(nil)'
00:06:06.875 EAL: request: mp_malloc_sync
00:06:06.875 EAL: No shared files mode enabled, IPC is disabled
00:06:06.875 EAL: Heap on socket 0 was shrunk by 4MB
[the same obtain-policy / restore-policy / expand / shrink cycle repeats for 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB and 514MB allocations, the 514MB shrink completing at 00:06:10.987]
00:06:12.364 EAL: Trying to obtain current memory policy.
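An aside for readers of this log: the allocation sizes the vtophys_spdk_malloc_test exercises (4MB, 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB, 514MB, 1026MB) follow a simple pattern. The formula below is an observation about the logged sizes, not taken from the test's source:

```python
# Each expand/shrink size in the log is 2 + 2**k MiB for k = 1..10;
# this reproduces the exact sequence of heap sizes the EAL reported.
sizes_mb = [2 + 2**k for k in range(1, 11)]
print(sizes_mb)  # [4, 6, 10, 18, 34, 66, 130, 258, 514, 1026]
```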
00:06:12.364 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:12.364 EAL: Restoring previous memory policy: 4
00:06:12.364 EAL: Calling mem event callback 'spdk:(nil)'
00:06:12.364 EAL: request: mp_malloc_sync
00:06:12.364 EAL: No shared files mode enabled, IPC is disabled
00:06:12.364 EAL: Heap on socket 0 was expanded by 1026MB
00:06:15.651 EAL: Calling mem event callback 'spdk:(nil)'
00:06:15.651 EAL: request: mp_malloc_sync
00:06:15.651 EAL: No shared files mode enabled, IPC is disabled
00:06:15.651 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:17.554 passed
00:06:17.554
00:06:17.554 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:17.554               suites      1      1    n/a      0        0
00:06:17.554                tests      2      2      2      0        0
00:06:17.554              asserts   6384   6384   6384      0      n/a
00:06:17.554
00:06:17.554 Elapsed time =   11.138 seconds
00:06:17.554 EAL: No shared files mode enabled, IPC is disabled
00:06:17.554 EAL: No shared files mode enabled, IPC is disabled
00:06:17.554 EAL: No shared files mode enabled, IPC is disabled
00:06:17.554
00:06:17.554 real	0m11.725s
00:06:17.554 user	0m10.505s
00:06:17.554 sys	0m1.139s
00:06:17.554 16:23:14 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:17.554 16:23:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:17.554 ************************************
00:06:17.554 END TEST env_vtophys
00:06:17.554 ************************************
00:06:17.554 16:23:14 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:06:17.554 16:23:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:17.554 16:23:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:17.554 16:23:14 env -- common/autotest_common.sh@10 -- # set +x
00:06:17.554 ************************************
00:06:17.554 START TEST env_pci
00:06:17.554 ************************************
00:06:17.554 16:23:14 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:06:17.812
00:06:17.812
00:06:17.812 CUnit - A unit testing framework for C - Version 2.1-3
00:06:17.812 http://cunit.sourceforge.net/
00:06:17.812
00:06:17.812
00:06:17.812 Suite: pci
00:06:17.812 Test: pci_hook ...[2024-07-24 16:23:14.421732] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1510020 has claimed it
00:06:17.812 EAL: Cannot find device (10000:00:01.0)
00:06:17.812 EAL: Failed to attach device on primary process
00:06:17.812 passed
00:06:17.812
00:06:17.812 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:17.812               suites      1      1    n/a      0        0
00:06:17.812                tests      1      1      1      0        0
00:06:17.812              asserts     25     25     25      0      n/a
00:06:17.812
00:06:17.812 Elapsed time =    0.085 seconds
00:06:17.812
00:06:17.812 real	0m0.191s
00:06:17.812 user	0m0.074s
00:06:17.812 sys	0m0.115s
00:06:17.812 16:23:14 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:17.812 16:23:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:17.812 ************************************
00:06:17.812 END TEST env_pci
00:06:17.812 ************************************
00:06:17.812 16:23:14 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:17.812 16:23:14 env -- env/env.sh@15 -- # uname
00:06:17.812 16:23:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:17.812 16:23:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:17.812 16:23:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:17.812 16:23:14 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:06:17.812 16:23:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:17.812 16:23:14 env -- common/autotest_common.sh@10 -- # set +x
00:06:17.812 ************************************
00:06:17.812 START TEST env_dpdk_post_init
00:06:17.812 ************************************
00:06:17.812 16:23:14 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:18.072 EAL: Detected CPU lcores: 112
00:06:18.073 EAL: Detected NUMA nodes: 2
00:06:18.073 EAL: Detected shared linkage of DPDK
00:06:18.073 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:18.073 EAL: Selected IOVA mode 'PA'
00:06:18.073 EAL: VFIO support initialized
00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0)
00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym
00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0
00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym
00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0
00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0)
00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym
00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0
00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym
00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0
00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0)
00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym
00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0
00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym
00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0
00:06:18.073 EAL: Probe PCI driver: qat 
(8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: 
Creating cryptodev 0000:1a:01.7_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 
00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 
00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 
0000:1c:01.5_qat_asym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:06:18.073 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.073 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:18.073 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:06:18.074 CRYPTODEV: Initialisation 
parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:06:18.074 CRYPTODEV: 
Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, 
max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 
0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 
EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:18.074 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:06:18.074 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:18.074 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.074 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:18.074 EAL: Probe PCI driver: 
qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:18.074 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.074 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:18.074 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.074 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:18.074 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3d:02.1 cannot be used 
00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 
0000:3f:01.2 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:18.075 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:18.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:06:18.075 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.075 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:18.075 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:18.334 EAL: Using IOMMU type 1 (Type 1) 00:06:18.334 EAL: Ignore mapping IO port bar(1) 00:06:18.334 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:18.334 EAL: Ignore mapping IO port bar(1) 00:06:18.335 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:18.335 EAL: Ignore mapping IO port bar(1) 00:06:18.335 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:18.335 EAL: Ignore mapping IO port bar(1) 00:06:18.335 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:18.335 EAL: Ignore mapping IO port bar(1) 00:06:18.335 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:18.335 EAL: Ignore mapping IO port bar(1) 00:06:18.335 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:06:18.335 EAL: Ignore mapping IO port bar(1) 00:06:18.335 EAL: Probe PCI driver: 
spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:06:18.335 EAL: Ignore mapping IO port bar(1) 00:06:18.335 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:01.7 cannot be used 
00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 
0000:3f:01.0 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:18.335 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:18.335 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:06:18.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:18.335 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:18.594 EAL: Ignore mapping IO port bar(1) 00:06:18.594 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:18.594 EAL: Ignore mapping IO port bar(1) 00:06:18.594 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:18.594 EAL: Ignore mapping IO port bar(1) 00:06:18.594 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:06:18.594 EAL: Ignore mapping IO port bar(1) 00:06:18.594 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:06:18.594 EAL: Ignore mapping 
IO port bar(1)
00:06:18.594 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:06:18.594 EAL: Ignore mapping IO port bar(1)
00:06:18.594 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:06:18.594 EAL: Ignore mapping IO port bar(1)
00:06:18.594 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:06:18.594 EAL: Ignore mapping IO port bar(1)
00:06:18.594 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:06:19.530 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1)
00:06:23.716 EAL: Releasing PCI mapped resource for 0000:d8:00.0
00:06:23.716 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001120000
00:06:23.716 Starting DPDK initialization...
00:06:23.716 Starting SPDK post initialization...
00:06:23.716 SPDK NVMe probe
00:06:23.716 Attaching to 0000:d8:00.0
00:06:23.716 Attached to 0000:d8:00.0
00:06:23.716 Cleaning up...
00:06:23.716 
00:06:23.716 real 0m5.691s
00:06:23.716 user 0m4.208s
00:06:23.716 sys 0m0.536s
00:06:23.716 16:23:20 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:23.716 16:23:20 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:23.716 ************************************
00:06:23.716 END TEST env_dpdk_post_init
00:06:23.716 ************************************
00:06:23.716 16:23:20 env -- env/env.sh@26 -- # uname
00:06:23.716 16:23:20 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:23.716 16:23:20 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:23.716 16:23:20 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:23.716 16:23:20 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:23.716 16:23:20 env -- common/autotest_common.sh@10 -- # set +x
00:06:23.716 ************************************
00:06:23.716 START TEST
env_mem_callbacks 00:06:23.716 ************************************ 00:06:23.716 16:23:20 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:23.716 EAL: Detected CPU lcores: 112 00:06:23.716 EAL: Detected NUMA nodes: 2 00:06:23.716 EAL: Detected shared linkage of DPDK 00:06:23.716 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:24.075 EAL: Selected IOVA mode 'PA' 00:06:24.075 EAL: VFIO support initialized 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:06:24.075 
CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 
0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:06:24.075 CRYPTODEV: Initialisation 
parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 
0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.075 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.075 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:06:24.075 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, 
max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:1c:02.2 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating 
cryptodev 0000:1c:02.6_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:24.076 
CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:06:24.076 
CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 
00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.076 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:06:24.076 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:06:24.076 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.077 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:06:24.077 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:06:24.077 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:06:24.077 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:24.077 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:06:24.077 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:24.077 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:06:24.077 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:24.077 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:3f:02.4 (socket 0)
00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.077 EAL: Requested device 0000:3f:02.4 cannot be used
00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0)
00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.077 EAL: Requested device 0000:3f:02.5 cannot be used
00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0)
00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.077 EAL: Requested device 0000:3f:02.6 cannot be used
00:06:24.077 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0)
00:06:24.077 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.077 EAL: Requested device 0000:3f:02.7 cannot be used
00:06:24.077 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:24.077
00:06:24.077
00:06:24.077 CUnit - A unit testing framework for C - Version 2.1-3
00:06:24.077 http://cunit.sourceforge.net/
00:06:24.077
00:06:24.077
00:06:24.077 Suite: memory
00:06:24.077 Test: test ...
00:06:24.077 register 0x200000200000 2097152
00:06:24.077 malloc 3145728
00:06:24.077 register 0x200000400000 4194304
00:06:24.077 buf 0x2000004fffc0 len 3145728 PASSED
00:06:24.077 malloc 64
00:06:24.077 buf 0x2000004ffec0 len 64 PASSED
00:06:24.077 malloc 4194304
00:06:24.077 register 0x200000800000 6291456
00:06:24.077 buf 0x2000009fffc0 len 4194304 PASSED
00:06:24.077 free 0x2000004fffc0 3145728
00:06:24.077 free 0x2000004ffec0 64
00:06:24.077 unregister 0x200000400000 4194304 PASSED
00:06:24.077 free 0x2000009fffc0 4194304
00:06:24.077 unregister 0x200000800000 6291456 PASSED
00:06:24.077 malloc 8388608
00:06:24.077 register 0x200000400000 10485760
00:06:24.077 buf 0x2000005fffc0 len 8388608 PASSED
00:06:24.077 free 0x2000005fffc0 8388608
00:06:24.077 unregister 0x200000400000 10485760 PASSED
00:06:24.077 passed
00:06:24.077
00:06:24.077 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:24.077               suites      1      1    n/a      0        0
00:06:24.077                tests      1      1      1      0        0
00:06:24.077              asserts     15     15     15      0      n/a
00:06:24.077
00:06:24.077 Elapsed time = 0.089 seconds
00:06:24.077
00:06:24.077 real    0m0.300s
00:06:24.077 user    0m0.154s
00:06:24.077 sys     0m0.144s
00:06:24.077 16:23:20 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:24.077 16:23:20 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:24.077 ************************************
00:06:24.077 END TEST env_mem_callbacks
00:06:24.077 ************************************
00:06:24.077
00:06:24.077 real    0m18.770s
00:06:24.077 user    0m15.447s
00:06:24.077 sys     0m2.336s
00:06:24.077 16:23:20 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:24.077 16:23:20 env -- common/autotest_common.sh@10 -- # set +x
00:06:24.077 ************************************
00:06:24.077 END TEST env
00:06:24.077 ************************************
00:06:24.077 16:23:20 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh
00:06:24.077 16:23:20
-- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:24.077 16:23:20 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:24.077 16:23:20 -- common/autotest_common.sh@10 -- # set +x
00:06:24.077 ************************************
00:06:24.077 START TEST rpc
00:06:24.077 ************************************
00:06:24.077 16:23:20 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh
00:06:24.336 * Looking for test storage...
00:06:24.336 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc
00:06:24.336 16:23:20 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1511326
00:06:24.336 16:23:20 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:24.336 16:23:20 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:06:24.336 16:23:20 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1511326
00:06:24.336 16:23:20 rpc -- common/autotest_common.sh@831 -- # '[' -z 1511326 ']'
00:06:24.336 16:23:20 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:24.336 16:23:20 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:24.336 16:23:20 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:24.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:24.336 16:23:20 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:24.336 16:23:20 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:24.336 [2024-07-24 16:23:21.094324] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
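The rpc.sh steps above launch spdk_tgt with `-e bdev` and then block in `waitforlisten` until the RPC socket at /var/tmp/spdk.sock accepts connections. A minimal standalone sketch of that polling loop follows; this is a hypothetical helper for illustration, not SPDK's actual `waitforlisten` implementation:

```python
import os
import socket
import time

def wait_for_unix_socket(path: str, timeout: float = 10.0, interval: float = 0.1) -> bool:
    """Poll until a UNIX-domain socket at `path` accepts a connection, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                try:
                    s.connect(path)
                    return True  # something is listening; the target is up
                except OSError:
                    pass  # socket file exists but nothing is accepting yet
        time.sleep(interval)
    return False
```

SPDK's own script additionally bounds the wait with a retry counter (the `max_retries=100` seen above) and checks that the target pid is still alive between attempts.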
00:06:24.336 [2024-07-24 16:23:21.094441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1511326 ]
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:01.0 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:01.1 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:01.2 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:01.3 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:01.4 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:01.5 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:01.6 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:01.7 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:02.0 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:02.1 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:02.2 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:02.3 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:02.4 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:02.5 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:02.6 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3d:02.7 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:01.0 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:01.1 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:01.2 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:01.3 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:01.4 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:01.5 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:01.6 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:01.7 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:02.0 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:02.1 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:02.2 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:02.3 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:02.4 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:02.5 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:02.6 cannot be used
00:06:24.595 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:24.595 EAL: Requested device 0000:3f:02.7 cannot be used
00:06:24.596 [2024-07-24 16:23:21.323988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:24.854 [2024-07-24 16:23:21.607881] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:24.854 [2024-07-24 16:23:21.607939] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1511326' to capture a snapshot of events at runtime.
00:06:24.854 [2024-07-24 16:23:21.607957] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:24.854 [2024-07-24 16:23:21.607975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:24.854 [2024-07-24 16:23:21.607989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1511326 for offline analysis/debug.
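The target was started with `-e bdev`, i.e. tracepoint group mask 0x8, which is why app_setup_trace reports the bdev group above. A small sketch of decoding such a hex group mask into group names; the bit assignments below are copied from the `trace_get_info` output later in this run, and the decoder itself is illustrative, not an SPDK API:

```python
# Group name -> mask bit, as reported by trace_get_info in this log.
TPOINT_GROUPS = {
    "iscsi_conn": 0x2, "scsi": 0x4, "bdev": 0x8, "nvmf_rdma": 0x10,
    "nvmf_tcp": 0x20, "ftl": 0x40, "blobfs": 0x80, "dsa": 0x200,
    "thread": 0x400, "nvme_pcie": 0x800, "iaa": 0x1000,
    "nvme_tcp": 0x2000, "bdev_nvme": 0x4000, "sock": 0x8000,
}

def decode_tpoint_mask(mask: str) -> list:
    """Return the names of the tracepoint groups enabled in a hex mask string."""
    value = int(mask, 16)
    return [name for name, bit in TPOINT_GROUPS.items() if value & bit]
```

For the run above, `decode_tpoint_mask("0x8")` yields just the bdev group, matching the "Tracepoint Group Mask bdev specified" notice.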
00:06:24.854 [2024-07-24 16:23:21.608045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.231 16:23:22 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.231 16:23:22 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:26.231 16:23:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:26.231 16:23:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:26.231 16:23:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:26.232 16:23:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:26.232 16:23:22 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.232 16:23:22 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.232 16:23:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.232 ************************************ 00:06:26.232 START TEST rpc_integrity 00:06:26.232 ************************************ 00:06:26.232 16:23:22 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:26.232 16:23:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:26.232 16:23:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.232 16:23:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.232 16:23:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.232 16:23:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:06:26.232 16:23:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:26.232 16:23:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:26.232 16:23:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:26.232 16:23:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.232 16:23:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.232 16:23:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.232 16:23:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:26.232 16:23:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:26.232 16:23:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.232 16:23:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.232 16:23:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.232 16:23:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:26.232 { 00:06:26.232 "name": "Malloc0", 00:06:26.232 "aliases": [ 00:06:26.232 "32388183-8054-4a75-bf07-cd9f6721d5e4" 00:06:26.232 ], 00:06:26.232 "product_name": "Malloc disk", 00:06:26.232 "block_size": 512, 00:06:26.232 "num_blocks": 16384, 00:06:26.232 "uuid": "32388183-8054-4a75-bf07-cd9f6721d5e4", 00:06:26.232 "assigned_rate_limits": { 00:06:26.232 "rw_ios_per_sec": 0, 00:06:26.232 "rw_mbytes_per_sec": 0, 00:06:26.232 "r_mbytes_per_sec": 0, 00:06:26.232 "w_mbytes_per_sec": 0 00:06:26.232 }, 00:06:26.232 "claimed": false, 00:06:26.232 "zoned": false, 00:06:26.232 "supported_io_types": { 00:06:26.232 "read": true, 00:06:26.232 "write": true, 00:06:26.232 "unmap": true, 00:06:26.232 "flush": true, 00:06:26.232 "reset": true, 00:06:26.232 "nvme_admin": false, 00:06:26.232 "nvme_io": false, 00:06:26.232 "nvme_io_md": false, 00:06:26.232 "write_zeroes": true, 00:06:26.232 "zcopy": true, 00:06:26.232 "get_zone_info": false, 00:06:26.232 "zone_management": 
false, 00:06:26.232 "zone_append": false, 00:06:26.232 "compare": false, 00:06:26.232 "compare_and_write": false, 00:06:26.232 "abort": true, 00:06:26.232 "seek_hole": false, 00:06:26.232 "seek_data": false, 00:06:26.232 "copy": true, 00:06:26.232 "nvme_iov_md": false 00:06:26.232 }, 00:06:26.232 "memory_domains": [ 00:06:26.232 { 00:06:26.232 "dma_device_id": "system", 00:06:26.232 "dma_device_type": 1 00:06:26.232 }, 00:06:26.232 { 00:06:26.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.232 "dma_device_type": 2 00:06:26.232 } 00:06:26.232 ], 00:06:26.232 "driver_specific": {} 00:06:26.232 } 00:06:26.232 ]' 00:06:26.232 16:23:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:26.232 16:23:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:26.232 16:23:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:26.232 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.232 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.232 [2024-07-24 16:23:23.033269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:26.232 [2024-07-24 16:23:23.033342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:26.232 [2024-07-24 16:23:23.033372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003fc80 00:06:26.232 [2024-07-24 16:23:23.033392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:26.232 [2024-07-24 16:23:23.036182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:26.232 [2024-07-24 16:23:23.036237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:26.232 Passthru0 00:06:26.232 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.232 16:23:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 
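After `bdev_passthru_create -b Malloc0 -p Passthru0`, `bdev_get_bdevs` should report two bdevs, with the base Malloc0 claimed via exclusive_write; rpc.sh verifies this with jq length checks. The same checks in Python, against a JSON fragment abridged from the dump in this log (a sketch of the verification only, not part of the test suite):

```python
import json

# Abridged bdev_get_bdevs output after the passthru bdev claims its base.
bdevs = json.loads("""
[
  {"name": "Malloc0", "claimed": true, "claim_type": "exclusive_write"},
  {"name": "Passthru0", "claimed": false,
   "driver_specific": {"passthru": {"base_bdev_name": "Malloc0"}}}
]
""")

assert len(bdevs) == 2  # mirrors rpc.sh's `jq length` == 2 check
base = next(b for b in bdevs if b["name"] == "Malloc0")
assert base["claimed"] and base["claim_type"] == "exclusive_write"
```

The claim is what later allows `bdev_passthru_delete` to release Malloc0 so `bdev_malloc_delete` can succeed.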
00:06:26.232 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.232 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.232 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.232 16:23:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:26.232 { 00:06:26.232 "name": "Malloc0", 00:06:26.232 "aliases": [ 00:06:26.232 "32388183-8054-4a75-bf07-cd9f6721d5e4" 00:06:26.232 ], 00:06:26.232 "product_name": "Malloc disk", 00:06:26.232 "block_size": 512, 00:06:26.232 "num_blocks": 16384, 00:06:26.232 "uuid": "32388183-8054-4a75-bf07-cd9f6721d5e4", 00:06:26.232 "assigned_rate_limits": { 00:06:26.232 "rw_ios_per_sec": 0, 00:06:26.232 "rw_mbytes_per_sec": 0, 00:06:26.232 "r_mbytes_per_sec": 0, 00:06:26.232 "w_mbytes_per_sec": 0 00:06:26.232 }, 00:06:26.232 "claimed": true, 00:06:26.232 "claim_type": "exclusive_write", 00:06:26.232 "zoned": false, 00:06:26.232 "supported_io_types": { 00:06:26.232 "read": true, 00:06:26.232 "write": true, 00:06:26.232 "unmap": true, 00:06:26.232 "flush": true, 00:06:26.232 "reset": true, 00:06:26.232 "nvme_admin": false, 00:06:26.232 "nvme_io": false, 00:06:26.232 "nvme_io_md": false, 00:06:26.232 "write_zeroes": true, 00:06:26.232 "zcopy": true, 00:06:26.232 "get_zone_info": false, 00:06:26.232 "zone_management": false, 00:06:26.232 "zone_append": false, 00:06:26.232 "compare": false, 00:06:26.232 "compare_and_write": false, 00:06:26.232 "abort": true, 00:06:26.232 "seek_hole": false, 00:06:26.232 "seek_data": false, 00:06:26.232 "copy": true, 00:06:26.232 "nvme_iov_md": false 00:06:26.232 }, 00:06:26.232 "memory_domains": [ 00:06:26.232 { 00:06:26.232 "dma_device_id": "system", 00:06:26.232 "dma_device_type": 1 00:06:26.232 }, 00:06:26.232 { 00:06:26.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.232 "dma_device_type": 2 00:06:26.232 } 00:06:26.232 ], 00:06:26.232 "driver_specific": {} 00:06:26.232 }, 00:06:26.232 { 00:06:26.232 
"name": "Passthru0", 00:06:26.232 "aliases": [ 00:06:26.232 "fa5baf76-83d1-557f-b42d-1ec2565ccc47" 00:06:26.232 ], 00:06:26.232 "product_name": "passthru", 00:06:26.232 "block_size": 512, 00:06:26.232 "num_blocks": 16384, 00:06:26.232 "uuid": "fa5baf76-83d1-557f-b42d-1ec2565ccc47", 00:06:26.232 "assigned_rate_limits": { 00:06:26.232 "rw_ios_per_sec": 0, 00:06:26.232 "rw_mbytes_per_sec": 0, 00:06:26.232 "r_mbytes_per_sec": 0, 00:06:26.232 "w_mbytes_per_sec": 0 00:06:26.232 }, 00:06:26.232 "claimed": false, 00:06:26.232 "zoned": false, 00:06:26.232 "supported_io_types": { 00:06:26.232 "read": true, 00:06:26.232 "write": true, 00:06:26.232 "unmap": true, 00:06:26.232 "flush": true, 00:06:26.232 "reset": true, 00:06:26.232 "nvme_admin": false, 00:06:26.232 "nvme_io": false, 00:06:26.232 "nvme_io_md": false, 00:06:26.232 "write_zeroes": true, 00:06:26.232 "zcopy": true, 00:06:26.233 "get_zone_info": false, 00:06:26.233 "zone_management": false, 00:06:26.233 "zone_append": false, 00:06:26.233 "compare": false, 00:06:26.233 "compare_and_write": false, 00:06:26.233 "abort": true, 00:06:26.233 "seek_hole": false, 00:06:26.233 "seek_data": false, 00:06:26.233 "copy": true, 00:06:26.233 "nvme_iov_md": false 00:06:26.233 }, 00:06:26.233 "memory_domains": [ 00:06:26.233 { 00:06:26.233 "dma_device_id": "system", 00:06:26.233 "dma_device_type": 1 00:06:26.233 }, 00:06:26.233 { 00:06:26.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.233 "dma_device_type": 2 00:06:26.233 } 00:06:26.233 ], 00:06:26.233 "driver_specific": { 00:06:26.233 "passthru": { 00:06:26.233 "name": "Passthru0", 00:06:26.233 "base_bdev_name": "Malloc0" 00:06:26.233 } 00:06:26.233 } 00:06:26.233 } 00:06:26.233 ]' 00:06:26.233 16:23:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:26.491 16:23:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:26.491 16:23:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:26.491 16:23:23 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.491 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.491 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.491 16:23:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:26.491 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.491 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.491 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.491 16:23:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:26.491 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.491 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.492 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.492 16:23:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:26.492 16:23:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:26.492 16:23:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:26.492 00:06:26.492 real 0m0.320s 00:06:26.492 user 0m0.184s 00:06:26.492 sys 0m0.059s 00:06:26.492 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.492 16:23:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.492 ************************************ 00:06:26.492 END TEST rpc_integrity 00:06:26.492 ************************************ 00:06:26.492 16:23:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:26.492 16:23:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.492 16:23:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.492 16:23:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.492 ************************************ 00:06:26.492 START TEST rpc_plugins 00:06:26.492 
************************************ 00:06:26.492 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:26.492 16:23:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:26.492 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.492 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:26.492 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.492 16:23:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:26.492 16:23:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:26.492 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.492 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:26.492 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.492 16:23:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:26.492 { 00:06:26.492 "name": "Malloc1", 00:06:26.492 "aliases": [ 00:06:26.492 "31955399-04e7-40be-8fd0-ada57e430f23" 00:06:26.492 ], 00:06:26.492 "product_name": "Malloc disk", 00:06:26.492 "block_size": 4096, 00:06:26.492 "num_blocks": 256, 00:06:26.492 "uuid": "31955399-04e7-40be-8fd0-ada57e430f23", 00:06:26.492 "assigned_rate_limits": { 00:06:26.492 "rw_ios_per_sec": 0, 00:06:26.492 "rw_mbytes_per_sec": 0, 00:06:26.492 "r_mbytes_per_sec": 0, 00:06:26.492 "w_mbytes_per_sec": 0 00:06:26.492 }, 00:06:26.492 "claimed": false, 00:06:26.492 "zoned": false, 00:06:26.492 "supported_io_types": { 00:06:26.492 "read": true, 00:06:26.492 "write": true, 00:06:26.492 "unmap": true, 00:06:26.492 "flush": true, 00:06:26.492 "reset": true, 00:06:26.492 "nvme_admin": false, 00:06:26.492 "nvme_io": false, 00:06:26.492 "nvme_io_md": false, 00:06:26.492 "write_zeroes": true, 00:06:26.492 "zcopy": true, 00:06:26.492 "get_zone_info": false, 00:06:26.492 "zone_management": false, 00:06:26.492 "zone_append": false, 
00:06:26.492 "compare": false, 00:06:26.492 "compare_and_write": false, 00:06:26.492 "abort": true, 00:06:26.492 "seek_hole": false, 00:06:26.492 "seek_data": false, 00:06:26.492 "copy": true, 00:06:26.492 "nvme_iov_md": false 00:06:26.492 }, 00:06:26.492 "memory_domains": [ 00:06:26.492 { 00:06:26.492 "dma_device_id": "system", 00:06:26.492 "dma_device_type": 1 00:06:26.492 }, 00:06:26.492 { 00:06:26.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.492 "dma_device_type": 2 00:06:26.492 } 00:06:26.492 ], 00:06:26.492 "driver_specific": {} 00:06:26.492 } 00:06:26.492 ]' 00:06:26.492 16:23:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:26.751 16:23:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:26.751 16:23:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:26.751 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.751 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:26.751 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.751 16:23:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:26.751 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.751 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:26.751 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.751 16:23:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:26.751 16:23:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:26.751 16:23:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:26.751 00:06:26.751 real 0m0.153s 00:06:26.751 user 0m0.096s 00:06:26.751 sys 0m0.022s 00:06:26.751 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.751 16:23:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:26.751 ************************************ 00:06:26.751 END TEST 
rpc_plugins 00:06:26.751 ************************************ 00:06:26.751 16:23:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:26.751 16:23:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.751 16:23:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.751 16:23:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.751 ************************************ 00:06:26.751 START TEST rpc_trace_cmd_test 00:06:26.751 ************************************ 00:06:26.751 16:23:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:26.751 16:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:26.751 16:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:26.751 16:23:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.751 16:23:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.751 16:23:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.751 16:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:26.751 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1511326", 00:06:26.751 "tpoint_group_mask": "0x8", 00:06:26.751 "iscsi_conn": { 00:06:26.751 "mask": "0x2", 00:06:26.751 "tpoint_mask": "0x0" 00:06:26.751 }, 00:06:26.751 "scsi": { 00:06:26.751 "mask": "0x4", 00:06:26.751 "tpoint_mask": "0x0" 00:06:26.751 }, 00:06:26.751 "bdev": { 00:06:26.751 "mask": "0x8", 00:06:26.751 "tpoint_mask": "0xffffffffffffffff" 00:06:26.751 }, 00:06:26.751 "nvmf_rdma": { 00:06:26.751 "mask": "0x10", 00:06:26.751 "tpoint_mask": "0x0" 00:06:26.751 }, 00:06:26.751 "nvmf_tcp": { 00:06:26.751 "mask": "0x20", 00:06:26.751 "tpoint_mask": "0x0" 00:06:26.751 }, 00:06:26.751 "ftl": { 00:06:26.751 "mask": "0x40", 00:06:26.751 "tpoint_mask": "0x0" 00:06:26.751 }, 00:06:26.751 "blobfs": { 00:06:26.751 "mask": "0x80", 00:06:26.751 "tpoint_mask": "0x0" 
00:06:26.751 }, 00:06:26.751 "dsa": { 00:06:26.751 "mask": "0x200", 00:06:26.751 "tpoint_mask": "0x0" 00:06:26.751 }, 00:06:26.751 "thread": { 00:06:26.751 "mask": "0x400", 00:06:26.751 "tpoint_mask": "0x0" 00:06:26.751 }, 00:06:26.751 "nvme_pcie": { 00:06:26.751 "mask": "0x800", 00:06:26.751 "tpoint_mask": "0x0" 00:06:26.751 }, 00:06:26.751 "iaa": { 00:06:26.751 "mask": "0x1000", 00:06:26.751 "tpoint_mask": "0x0" 00:06:26.751 }, 00:06:26.751 "nvme_tcp": { 00:06:26.751 "mask": "0x2000", 00:06:26.751 "tpoint_mask": "0x0" 00:06:26.751 }, 00:06:26.751 "bdev_nvme": { 00:06:26.751 "mask": "0x4000", 00:06:26.751 "tpoint_mask": "0x0" 00:06:26.751 }, 00:06:26.751 "sock": { 00:06:26.751 "mask": "0x8000", 00:06:26.751 "tpoint_mask": "0x0" 00:06:26.751 } 00:06:26.751 }' 00:06:26.751 16:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:26.751 16:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:26.751 16:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:27.010 16:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:27.010 16:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:27.010 16:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:27.010 16:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:27.010 16:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:27.010 16:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:27.010 16:23:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:27.010 00:06:27.010 real 0m0.243s 00:06:27.010 user 0m0.206s 00:06:27.010 sys 0m0.029s 00:06:27.010 16:23:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.010 16:23:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.010 ************************************ 
00:06:27.010 END TEST rpc_trace_cmd_test 00:06:27.010 ************************************ 00:06:27.010 16:23:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:27.010 16:23:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:27.010 16:23:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:27.010 16:23:23 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.010 16:23:23 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.010 16:23:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.010 ************************************ 00:06:27.010 START TEST rpc_daemon_integrity 00:06:27.010 ************************************ 00:06:27.010 16:23:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:27.010 16:23:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:27.010 16:23:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.010 16:23:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.010 16:23:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.010 16:23:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:27.010 16:23:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:27.269 16:23:23 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:27.269 { 00:06:27.269 "name": "Malloc2", 00:06:27.269 "aliases": [ 00:06:27.269 "c319a3ff-0276-4b45-8b4b-619c3479ad65" 00:06:27.269 ], 00:06:27.269 "product_name": "Malloc disk", 00:06:27.269 "block_size": 512, 00:06:27.269 "num_blocks": 16384, 00:06:27.269 "uuid": "c319a3ff-0276-4b45-8b4b-619c3479ad65", 00:06:27.269 "assigned_rate_limits": { 00:06:27.269 "rw_ios_per_sec": 0, 00:06:27.269 "rw_mbytes_per_sec": 0, 00:06:27.269 "r_mbytes_per_sec": 0, 00:06:27.269 "w_mbytes_per_sec": 0 00:06:27.269 }, 00:06:27.269 "claimed": false, 00:06:27.269 "zoned": false, 00:06:27.269 "supported_io_types": { 00:06:27.269 "read": true, 00:06:27.269 "write": true, 00:06:27.269 "unmap": true, 00:06:27.269 "flush": true, 00:06:27.269 "reset": true, 00:06:27.269 "nvme_admin": false, 00:06:27.269 "nvme_io": false, 00:06:27.269 "nvme_io_md": false, 00:06:27.269 "write_zeroes": true, 00:06:27.269 "zcopy": true, 00:06:27.269 "get_zone_info": false, 00:06:27.269 "zone_management": false, 00:06:27.269 "zone_append": false, 00:06:27.269 "compare": false, 00:06:27.269 "compare_and_write": false, 00:06:27.269 "abort": true, 00:06:27.269 "seek_hole": false, 00:06:27.269 "seek_data": false, 00:06:27.269 "copy": true, 00:06:27.269 "nvme_iov_md": false 00:06:27.269 }, 00:06:27.269 "memory_domains": [ 00:06:27.269 { 00:06:27.269 "dma_device_id": "system", 00:06:27.269 "dma_device_type": 1 00:06:27.269 }, 00:06:27.269 { 00:06:27.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.269 "dma_device_type": 2 00:06:27.269 } 00:06:27.269 ], 00:06:27.269 "driver_specific": {} 00:06:27.269 } 00:06:27.269 ]' 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@17 -- # jq length 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.269 [2024-07-24 16:23:23.984074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:27.269 [2024-07-24 16:23:23.984134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:27.269 [2024-07-24 16:23:23.984169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:06:27.269 [2024-07-24 16:23:23.984187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:27.269 [2024-07-24 16:23:23.986910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:27.269 [2024-07-24 16:23:23.986947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:27.269 Passthru0 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.269 16:23:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.269 16:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.269 16:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:27.269 { 00:06:27.269 "name": "Malloc2", 00:06:27.269 "aliases": [ 00:06:27.269 "c319a3ff-0276-4b45-8b4b-619c3479ad65" 00:06:27.269 ], 00:06:27.269 "product_name": "Malloc disk", 00:06:27.269 "block_size": 512, 00:06:27.269 "num_blocks": 16384, 00:06:27.269 
"uuid": "c319a3ff-0276-4b45-8b4b-619c3479ad65", 00:06:27.269 "assigned_rate_limits": { 00:06:27.269 "rw_ios_per_sec": 0, 00:06:27.269 "rw_mbytes_per_sec": 0, 00:06:27.269 "r_mbytes_per_sec": 0, 00:06:27.269 "w_mbytes_per_sec": 0 00:06:27.269 }, 00:06:27.269 "claimed": true, 00:06:27.269 "claim_type": "exclusive_write", 00:06:27.269 "zoned": false, 00:06:27.269 "supported_io_types": { 00:06:27.269 "read": true, 00:06:27.269 "write": true, 00:06:27.269 "unmap": true, 00:06:27.269 "flush": true, 00:06:27.269 "reset": true, 00:06:27.269 "nvme_admin": false, 00:06:27.269 "nvme_io": false, 00:06:27.269 "nvme_io_md": false, 00:06:27.269 "write_zeroes": true, 00:06:27.269 "zcopy": true, 00:06:27.269 "get_zone_info": false, 00:06:27.269 "zone_management": false, 00:06:27.269 "zone_append": false, 00:06:27.269 "compare": false, 00:06:27.269 "compare_and_write": false, 00:06:27.269 "abort": true, 00:06:27.269 "seek_hole": false, 00:06:27.269 "seek_data": false, 00:06:27.269 "copy": true, 00:06:27.269 "nvme_iov_md": false 00:06:27.269 }, 00:06:27.269 "memory_domains": [ 00:06:27.269 { 00:06:27.269 "dma_device_id": "system", 00:06:27.269 "dma_device_type": 1 00:06:27.269 }, 00:06:27.269 { 00:06:27.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.269 "dma_device_type": 2 00:06:27.269 } 00:06:27.269 ], 00:06:27.269 "driver_specific": {} 00:06:27.269 }, 00:06:27.269 { 00:06:27.269 "name": "Passthru0", 00:06:27.269 "aliases": [ 00:06:27.269 "2dbbf25d-4752-53b0-843c-b83ced054deb" 00:06:27.269 ], 00:06:27.269 "product_name": "passthru", 00:06:27.269 "block_size": 512, 00:06:27.269 "num_blocks": 16384, 00:06:27.269 "uuid": "2dbbf25d-4752-53b0-843c-b83ced054deb", 00:06:27.269 "assigned_rate_limits": { 00:06:27.269 "rw_ios_per_sec": 0, 00:06:27.269 "rw_mbytes_per_sec": 0, 00:06:27.269 "r_mbytes_per_sec": 0, 00:06:27.269 "w_mbytes_per_sec": 0 00:06:27.269 }, 00:06:27.269 "claimed": false, 00:06:27.269 "zoned": false, 00:06:27.269 "supported_io_types": { 00:06:27.269 "read": true, 
00:06:27.270 "write": true, 00:06:27.270 "unmap": true, 00:06:27.270 "flush": true, 00:06:27.270 "reset": true, 00:06:27.270 "nvme_admin": false, 00:06:27.270 "nvme_io": false, 00:06:27.270 "nvme_io_md": false, 00:06:27.270 "write_zeroes": true, 00:06:27.270 "zcopy": true, 00:06:27.270 "get_zone_info": false, 00:06:27.270 "zone_management": false, 00:06:27.270 "zone_append": false, 00:06:27.270 "compare": false, 00:06:27.270 "compare_and_write": false, 00:06:27.270 "abort": true, 00:06:27.270 "seek_hole": false, 00:06:27.270 "seek_data": false, 00:06:27.270 "copy": true, 00:06:27.270 "nvme_iov_md": false 00:06:27.270 }, 00:06:27.270 "memory_domains": [ 00:06:27.270 { 00:06:27.270 "dma_device_id": "system", 00:06:27.270 "dma_device_type": 1 00:06:27.270 }, 00:06:27.270 { 00:06:27.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.270 "dma_device_type": 2 00:06:27.270 } 00:06:27.270 ], 00:06:27.270 "driver_specific": { 00:06:27.270 "passthru": { 00:06:27.270 "name": "Passthru0", 00:06:27.270 "base_bdev_name": "Malloc2" 00:06:27.270 } 00:06:27.270 } 00:06:27.270 } 00:06:27.270 ]' 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:27.270 16:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:27.529 16:23:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:27.529 00:06:27.529 real 0m0.309s 00:06:27.529 user 0m0.183s 00:06:27.529 sys 0m0.056s 00:06:27.529 16:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.529 16:23:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.529 ************************************ 00:06:27.529 END TEST rpc_daemon_integrity 00:06:27.529 ************************************ 00:06:27.529 16:23:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:27.529 16:23:24 rpc -- rpc/rpc.sh@84 -- # killprocess 1511326 00:06:27.529 16:23:24 rpc -- common/autotest_common.sh@950 -- # '[' -z 1511326 ']' 00:06:27.529 16:23:24 rpc -- common/autotest_common.sh@954 -- # kill -0 1511326 00:06:27.529 16:23:24 rpc -- common/autotest_common.sh@955 -- # uname 00:06:27.529 16:23:24 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.529 16:23:24 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1511326 00:06:27.529 16:23:24 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.529 16:23:24 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.529 16:23:24 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1511326' 00:06:27.529 killing process with pid 1511326 
00:06:27.529 16:23:24 rpc -- common/autotest_common.sh@969 -- # kill 1511326 00:06:27.529 16:23:24 rpc -- common/autotest_common.sh@974 -- # wait 1511326 00:06:30.818 00:06:30.818 real 0m6.622s 00:06:30.818 user 0m7.155s 00:06:30.818 sys 0m1.120s 00:06:30.818 16:23:27 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.818 16:23:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.818 ************************************ 00:06:30.818 END TEST rpc 00:06:30.818 ************************************ 00:06:30.818 16:23:27 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:30.818 16:23:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.818 16:23:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.818 16:23:27 -- common/autotest_common.sh@10 -- # set +x 00:06:30.818 ************************************ 00:06:30.818 START TEST skip_rpc 00:06:30.818 ************************************ 00:06:30.818 16:23:27 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:30.818 * Looking for test storage... 
00:06:30.818 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:30.818 16:23:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:30.818 16:23:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:06:30.818 16:23:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:30.818 16:23:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.818 16:23:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.818 16:23:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.077 ************************************ 00:06:31.077 START TEST skip_rpc 00:06:31.077 ************************************ 00:06:31.077 16:23:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:31.077 16:23:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1512561 00:06:31.077 16:23:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.077 16:23:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:31.077 16:23:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:31.077 [2024-07-24 16:23:27.839832] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:06:31.077 [2024-07-24 16:23:27.839940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512561 ] 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:31.336 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:31.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:31.336 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:31.336 [2024-07-24 16:23:28.068179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.595 [2024-07-24 16:23:28.355772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1512561 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1512561 ']' 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1512561 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1512561 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1512561' 00:06:36.865 killing process with pid 1512561 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1512561 00:06:36.865 16:23:32 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1512561 00:06:39.399 00:06:39.399 real 0m8.379s 00:06:39.399 user 0m7.841s 00:06:39.399 sys 0m0.534s 00:06:39.399 16:23:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.399 16:23:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.399 
************************************ 00:06:39.399 END TEST skip_rpc 00:06:39.399 ************************************ 00:06:39.399 16:23:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:39.399 16:23:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.399 16:23:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.399 16:23:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.399 ************************************ 00:06:39.399 START TEST skip_rpc_with_json 00:06:39.399 ************************************ 00:06:39.399 16:23:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:39.399 16:23:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:39.399 16:23:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1514092 00:06:39.400 16:23:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.400 16:23:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1514092 00:06:39.400 16:23:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1514092 ']' 00:06:39.400 16:23:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.400 16:23:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.400 16:23:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.400 16:23:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:39.400 16:23:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.400 16:23:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:39.659 [2024-07-24 16:23:36.290242] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:06:39.659 [2024-07-24 16:23:36.290358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514092 ] 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 
0000:3d:02.1 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:39.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:39.659 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:39.659 [2024-07-24 16:23:36.512702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.227 [2024-07-24 16:23:36.795095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.605 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.605 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:41.605 16:23:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:41.606 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.606 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:41.606 [2024-07-24 16:23:38.030388] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:41.606 
request: 00:06:41.606 { 00:06:41.606 "trtype": "tcp", 00:06:41.606 "method": "nvmf_get_transports", 00:06:41.606 "req_id": 1 00:06:41.606 } 00:06:41.606 Got JSON-RPC error response 00:06:41.606 response: 00:06:41.606 { 00:06:41.606 "code": -19, 00:06:41.606 "message": "No such device" 00:06:41.606 } 00:06:41.606 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:41.606 16:23:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:41.606 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.606 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:41.606 [2024-07-24 16:23:38.038517] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.606 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.606 16:23:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:41.606 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.606 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:41.606 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.606 16:23:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:41.606 { 00:06:41.606 "subsystems": [ 00:06:41.606 { 00:06:41.606 "subsystem": "keyring", 00:06:41.606 "config": [] 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "subsystem": "iobuf", 00:06:41.606 "config": [ 00:06:41.606 { 00:06:41.606 "method": "iobuf_set_options", 00:06:41.606 "params": { 00:06:41.606 "small_pool_count": 8192, 00:06:41.606 "large_pool_count": 1024, 00:06:41.606 "small_bufsize": 8192, 00:06:41.606 "large_bufsize": 135168 00:06:41.606 } 00:06:41.606 } 00:06:41.606 ] 00:06:41.606 }, 00:06:41.606 { 
00:06:41.606 "subsystem": "sock", 00:06:41.606 "config": [ 00:06:41.606 { 00:06:41.606 "method": "sock_set_default_impl", 00:06:41.606 "params": { 00:06:41.606 "impl_name": "posix" 00:06:41.606 } 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "method": "sock_impl_set_options", 00:06:41.606 "params": { 00:06:41.606 "impl_name": "ssl", 00:06:41.606 "recv_buf_size": 4096, 00:06:41.606 "send_buf_size": 4096, 00:06:41.606 "enable_recv_pipe": true, 00:06:41.606 "enable_quickack": false, 00:06:41.606 "enable_placement_id": 0, 00:06:41.606 "enable_zerocopy_send_server": true, 00:06:41.606 "enable_zerocopy_send_client": false, 00:06:41.606 "zerocopy_threshold": 0, 00:06:41.606 "tls_version": 0, 00:06:41.606 "enable_ktls": false 00:06:41.606 } 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "method": "sock_impl_set_options", 00:06:41.606 "params": { 00:06:41.606 "impl_name": "posix", 00:06:41.606 "recv_buf_size": 2097152, 00:06:41.606 "send_buf_size": 2097152, 00:06:41.606 "enable_recv_pipe": true, 00:06:41.606 "enable_quickack": false, 00:06:41.606 "enable_placement_id": 0, 00:06:41.606 "enable_zerocopy_send_server": true, 00:06:41.606 "enable_zerocopy_send_client": false, 00:06:41.606 "zerocopy_threshold": 0, 00:06:41.606 "tls_version": 0, 00:06:41.606 "enable_ktls": false 00:06:41.606 } 00:06:41.606 } 00:06:41.606 ] 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "subsystem": "vmd", 00:06:41.606 "config": [] 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "subsystem": "accel", 00:06:41.606 "config": [ 00:06:41.606 { 00:06:41.606 "method": "accel_set_options", 00:06:41.606 "params": { 00:06:41.606 "small_cache_size": 128, 00:06:41.606 "large_cache_size": 16, 00:06:41.606 "task_count": 2048, 00:06:41.606 "sequence_count": 2048, 00:06:41.606 "buf_count": 2048 00:06:41.606 } 00:06:41.606 } 00:06:41.606 ] 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "subsystem": "bdev", 00:06:41.606 "config": [ 00:06:41.606 { 00:06:41.606 "method": "bdev_set_options", 00:06:41.606 "params": { 00:06:41.606 
"bdev_io_pool_size": 65535, 00:06:41.606 "bdev_io_cache_size": 256, 00:06:41.606 "bdev_auto_examine": true, 00:06:41.606 "iobuf_small_cache_size": 128, 00:06:41.606 "iobuf_large_cache_size": 16 00:06:41.606 } 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "method": "bdev_raid_set_options", 00:06:41.606 "params": { 00:06:41.606 "process_window_size_kb": 1024, 00:06:41.606 "process_max_bandwidth_mb_sec": 0 00:06:41.606 } 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "method": "bdev_iscsi_set_options", 00:06:41.606 "params": { 00:06:41.606 "timeout_sec": 30 00:06:41.606 } 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "method": "bdev_nvme_set_options", 00:06:41.606 "params": { 00:06:41.606 "action_on_timeout": "none", 00:06:41.606 "timeout_us": 0, 00:06:41.606 "timeout_admin_us": 0, 00:06:41.606 "keep_alive_timeout_ms": 10000, 00:06:41.606 "arbitration_burst": 0, 00:06:41.606 "low_priority_weight": 0, 00:06:41.606 "medium_priority_weight": 0, 00:06:41.606 "high_priority_weight": 0, 00:06:41.606 "nvme_adminq_poll_period_us": 10000, 00:06:41.606 "nvme_ioq_poll_period_us": 0, 00:06:41.606 "io_queue_requests": 0, 00:06:41.606 "delay_cmd_submit": true, 00:06:41.606 "transport_retry_count": 4, 00:06:41.606 "bdev_retry_count": 3, 00:06:41.606 "transport_ack_timeout": 0, 00:06:41.606 "ctrlr_loss_timeout_sec": 0, 00:06:41.606 "reconnect_delay_sec": 0, 00:06:41.606 "fast_io_fail_timeout_sec": 0, 00:06:41.606 "disable_auto_failback": false, 00:06:41.606 "generate_uuids": false, 00:06:41.606 "transport_tos": 0, 00:06:41.606 "nvme_error_stat": false, 00:06:41.606 "rdma_srq_size": 0, 00:06:41.606 "io_path_stat": false, 00:06:41.606 "allow_accel_sequence": false, 00:06:41.606 "rdma_max_cq_size": 0, 00:06:41.606 "rdma_cm_event_timeout_ms": 0, 00:06:41.606 "dhchap_digests": [ 00:06:41.606 "sha256", 00:06:41.606 "sha384", 00:06:41.606 "sha512" 00:06:41.606 ], 00:06:41.606 "dhchap_dhgroups": [ 00:06:41.606 "null", 00:06:41.606 "ffdhe2048", 00:06:41.606 "ffdhe3072", 00:06:41.606 "ffdhe4096", 
00:06:41.606 "ffdhe6144", 00:06:41.606 "ffdhe8192" 00:06:41.606 ] 00:06:41.606 } 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "method": "bdev_nvme_set_hotplug", 00:06:41.606 "params": { 00:06:41.606 "period_us": 100000, 00:06:41.606 "enable": false 00:06:41.606 } 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "method": "bdev_wait_for_examine" 00:06:41.606 } 00:06:41.606 ] 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "subsystem": "scsi", 00:06:41.606 "config": null 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "subsystem": "scheduler", 00:06:41.606 "config": [ 00:06:41.606 { 00:06:41.606 "method": "framework_set_scheduler", 00:06:41.606 "params": { 00:06:41.606 "name": "static" 00:06:41.606 } 00:06:41.606 } 00:06:41.606 ] 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "subsystem": "vhost_scsi", 00:06:41.606 "config": [] 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "subsystem": "vhost_blk", 00:06:41.606 "config": [] 00:06:41.606 }, 00:06:41.606 { 00:06:41.606 "subsystem": "ublk", 00:06:41.606 "config": [] 00:06:41.606 }, 00:06:41.606 { 00:06:41.607 "subsystem": "nbd", 00:06:41.607 "config": [] 00:06:41.607 }, 00:06:41.607 { 00:06:41.607 "subsystem": "nvmf", 00:06:41.607 "config": [ 00:06:41.607 { 00:06:41.607 "method": "nvmf_set_config", 00:06:41.607 "params": { 00:06:41.607 "discovery_filter": "match_any", 00:06:41.607 "admin_cmd_passthru": { 00:06:41.607 "identify_ctrlr": false 00:06:41.607 } 00:06:41.607 } 00:06:41.607 }, 00:06:41.607 { 00:06:41.607 "method": "nvmf_set_max_subsystems", 00:06:41.607 "params": { 00:06:41.607 "max_subsystems": 1024 00:06:41.607 } 00:06:41.607 }, 00:06:41.607 { 00:06:41.607 "method": "nvmf_set_crdt", 00:06:41.607 "params": { 00:06:41.607 "crdt1": 0, 00:06:41.607 "crdt2": 0, 00:06:41.607 "crdt3": 0 00:06:41.607 } 00:06:41.607 }, 00:06:41.607 { 00:06:41.607 "method": "nvmf_create_transport", 00:06:41.607 "params": { 00:06:41.607 "trtype": "TCP", 00:06:41.607 "max_queue_depth": 128, 00:06:41.607 "max_io_qpairs_per_ctrlr": 127, 00:06:41.607 
"in_capsule_data_size": 4096, 00:06:41.607 "max_io_size": 131072, 00:06:41.607 "io_unit_size": 131072, 00:06:41.607 "max_aq_depth": 128, 00:06:41.607 "num_shared_buffers": 511, 00:06:41.607 "buf_cache_size": 4294967295, 00:06:41.607 "dif_insert_or_strip": false, 00:06:41.607 "zcopy": false, 00:06:41.607 "c2h_success": true, 00:06:41.607 "sock_priority": 0, 00:06:41.607 "abort_timeout_sec": 1, 00:06:41.607 "ack_timeout": 0, 00:06:41.607 "data_wr_pool_size": 0 00:06:41.607 } 00:06:41.607 } 00:06:41.607 ] 00:06:41.607 }, 00:06:41.607 { 00:06:41.607 "subsystem": "iscsi", 00:06:41.607 "config": [ 00:06:41.607 { 00:06:41.607 "method": "iscsi_set_options", 00:06:41.607 "params": { 00:06:41.607 "node_base": "iqn.2016-06.io.spdk", 00:06:41.607 "max_sessions": 128, 00:06:41.607 "max_connections_per_session": 2, 00:06:41.607 "max_queue_depth": 64, 00:06:41.607 "default_time2wait": 2, 00:06:41.607 "default_time2retain": 20, 00:06:41.607 "first_burst_length": 8192, 00:06:41.607 "immediate_data": true, 00:06:41.607 "allow_duplicated_isid": false, 00:06:41.607 "error_recovery_level": 0, 00:06:41.607 "nop_timeout": 60, 00:06:41.607 "nop_in_interval": 30, 00:06:41.607 "disable_chap": false, 00:06:41.607 "require_chap": false, 00:06:41.607 "mutual_chap": false, 00:06:41.607 "chap_group": 0, 00:06:41.607 "max_large_datain_per_connection": 64, 00:06:41.607 "max_r2t_per_connection": 4, 00:06:41.607 "pdu_pool_size": 36864, 00:06:41.607 "immediate_data_pool_size": 16384, 00:06:41.607 "data_out_pool_size": 2048 00:06:41.607 } 00:06:41.607 } 00:06:41.607 ] 00:06:41.607 } 00:06:41.607 ] 00:06:41.607 } 00:06:41.607 16:23:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:41.607 16:23:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1514092 00:06:41.607 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1514092 ']' 00:06:41.607 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # 
kill -0 1514092 00:06:41.607 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:41.607 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.607 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1514092 00:06:41.607 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.607 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.607 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1514092' 00:06:41.607 killing process with pid 1514092 00:06:41.607 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1514092 00:06:41.607 16:23:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1514092 00:06:44.900 16:23:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1514971 00:06:44.900 16:23:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:44.900 16:23:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:50.170 16:23:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1514971 00:06:50.170 16:23:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1514971 ']' 00:06:50.170 16:23:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1514971 00:06:50.170 16:23:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:50.170 16:23:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.170 16:23:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
1514971 00:06:50.170 16:23:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.170 16:23:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.170 16:23:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1514971' 00:06:50.170 killing process with pid 1514971 00:06:50.170 16:23:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1514971 00:06:50.170 16:23:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1514971 00:06:53.458 16:23:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:06:53.458 16:23:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:06:53.458 00:06:53.458 real 0m13.759s 00:06:53.458 user 0m13.066s 00:06:53.458 sys 0m1.111s 00:06:53.458 16:23:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.458 16:23:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.458 ************************************ 00:06:53.458 END TEST skip_rpc_with_json 00:06:53.458 ************************************ 00:06:53.458 16:23:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:53.458 16:23:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.458 16:23:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.458 16:23:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.458 ************************************ 00:06:53.458 START TEST skip_rpc_with_delay 00:06:53.458 ************************************ 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:53.458 16:23:50 
skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:53.458 [2024-07-24 16:23:50.139689] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:53.458 [2024-07-24 16:23:50.139809] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.458 00:06:53.458 real 0m0.200s 00:06:53.458 user 0m0.106s 00:06:53.458 sys 0m0.093s 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.458 16:23:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:53.458 ************************************ 00:06:53.458 END TEST skip_rpc_with_delay 00:06:53.458 ************************************ 00:06:53.458 16:23:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:53.458 16:23:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:53.458 16:23:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:53.458 16:23:50 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.458 16:23:50 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.458 16:23:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.458 ************************************ 00:06:53.458 START TEST exit_on_failed_rpc_init 00:06:53.458 ************************************ 00:06:53.458 16:23:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:53.458 16:23:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1516597 00:06:53.458 16:23:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1516597 00:06:53.458 16:23:50 
skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.458 16:23:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1516597 ']' 00:06:53.458 16:23:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.458 16:23:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.458 16:23:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.458 16:23:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.458 16:23:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:53.718 [2024-07-24 16:23:50.428905] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:06:53.718 [2024-07-24 16:23:50.429025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516597 ] 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:53.718 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:53.718 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:53.718 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:53.977 [2024-07-24 16:23:50.658541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.236 [2024-07-24 16:23:50.944885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.613 16:23:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.613 16:23:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:55.614 16:23:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:55.614 16:23:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:55.614 16:23:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:55.614 16:23:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:55.614 16:23:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.614 16:23:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.614 16:23:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.614 16:23:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.614 16:23:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.614 16:23:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.614 16:23:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:55.614 16:23:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:55.614 16:23:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:55.614 [2024-07-24 16:23:52.334949] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:06:55.614 [2024-07-24 16:23:52.335064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516872 ] 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:55.614 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:55.614 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:55.614 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:55.873 [2024-07-24 16:23:52.548417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.132 [2024-07-24 16:23:52.820006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.132 [2024-07-24 16:23:52.820120] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:56.132 [2024-07-24 16:23:52.820147] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:56.132 [2024-07-24 16:23:52.820166] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1516597 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1516597 ']' 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1516597 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1516597 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1516597' 
00:06:56.700 killing process with pid 1516597 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1516597 00:06:56.700 16:23:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1516597 00:06:59.989 00:06:59.989 real 0m6.521s 00:06:59.989 user 0m7.262s 00:06:59.989 sys 0m0.880s 00:06:59.989 16:23:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.989 16:23:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:59.989 ************************************ 00:06:59.989 END TEST exit_on_failed_rpc_init 00:06:59.989 ************************************ 00:07:00.247 16:23:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:07:00.247 00:07:00.247 real 0m29.299s 00:07:00.247 user 0m28.432s 00:07:00.247 sys 0m2.936s 00:07:00.247 16:23:56 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.247 16:23:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.247 ************************************ 00:07:00.247 END TEST skip_rpc 00:07:00.247 ************************************ 00:07:00.247 16:23:56 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:00.247 16:23:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.247 16:23:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.247 16:23:56 -- common/autotest_common.sh@10 -- # set +x 00:07:00.247 ************************************ 00:07:00.247 START TEST rpc_client 00:07:00.247 ************************************ 00:07:00.247 16:23:56 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:00.247 * Looking for test storage... 
00:07:00.247 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:07:00.247 16:23:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:00.247 OK 00:07:00.247 16:23:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:00.247 00:07:00.247 real 0m0.158s 00:07:00.247 user 0m0.067s 00:07:00.247 sys 0m0.098s 00:07:00.247 16:23:57 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.247 16:23:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:00.247 ************************************ 00:07:00.247 END TEST rpc_client 00:07:00.247 ************************************ 00:07:00.506 16:23:57 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:07:00.506 16:23:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.506 16:23:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.506 16:23:57 -- common/autotest_common.sh@10 -- # set +x 00:07:00.506 ************************************ 00:07:00.507 START TEST json_config 00:07:00.507 ************************************ 00:07:00.507 16:23:57 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:07:00.507 16:23:57 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.507 16:23:57 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.507 16:23:57 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.507 16:23:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.507 16:23:57 
json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.507 16:23:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.507 16:23:57 json_config -- paths/export.sh@5 -- # export PATH 00:07:00.507 16:23:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@47 -- # : 0 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:00.507 16:23:57 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@359 -- # 
trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:07:00.507 INFO: JSON configuration test init 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:07:00.507 16:23:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.507 16:23:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:07:00.507 16:23:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:00.507 16:23:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.507 16:23:57 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:07:00.507 16:23:57 json_config -- json_config/common.sh@9 -- # local app=target 00:07:00.507 16:23:57 json_config -- json_config/common.sh@10 -- # shift 00:07:00.507 16:23:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:00.507 16:23:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:00.507 16:23:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:00.507 16:23:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:00.507 16:23:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:00.507 16:23:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1517781 00:07:00.507 16:23:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:00.507 Waiting for target to run... 
00:07:00.507 16:23:57 json_config -- json_config/common.sh@25 -- # waitforlisten 1517781 /var/tmp/spdk_tgt.sock 00:07:00.507 16:23:57 json_config -- common/autotest_common.sh@831 -- # '[' -z 1517781 ']' 00:07:00.507 16:23:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:00.507 16:23:57 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:00.507 16:23:57 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.507 16:23:57 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:00.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:00.507 16:23:57 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.507 16:23:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.766 [2024-07-24 16:23:57.428513] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:07:00.766 [2024-07-24 16:23:57.428637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517781 ] 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:01.334 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:01.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:01.334 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:01.334 [2024-07-24 16:23:58.032898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.592 [2024-07-24 16:23:58.288124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.158 16:23:58 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.158 16:23:58 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:02.158 16:23:58 json_config -- json_config/common.sh@26 -- # echo '' 00:07:02.158 00:07:02.158 16:23:58 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:07:02.158 16:23:58 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:07:02.158 16:23:58 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:02.158 16:23:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.158 16:23:58 json_config -- json_config/json_config.sh@99 -- # [[ 1 -eq 1 ]] 00:07:02.158 16:23:58 json_config -- json_config/json_config.sh@100 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:07:02.158 16:23:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:07:02.417 16:23:59 json_config -- json_config/json_config.sh@101 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:07:02.417 16:23:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:07:02.676 [2024-07-24 16:23:59.379741] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:07:02.676 16:23:59 json_config -- json_config/json_config.sh@102 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:07:02.676 16:23:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:07:02.935 [2024-07-24 16:23:59.608371] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:07:02.935 16:23:59 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:07:02.936 16:23:59 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:02.936 16:23:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:02.936 16:23:59 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:02.936 16:23:59 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:07:02.936 16:23:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:03.504 [2024-07-24 16:24:00.123782] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:07:10.114 16:24:06 json_config -- 
json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:10.114 16:24:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.114 16:24:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:10.114 16:24:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@51 -- # sort 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:07:10.114 16:24:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.114 16:24:06 json_config -- common/autotest_common.sh@10 
-- # set +x 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@59 -- # return 0 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:07:10.114 16:24:06 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.114 16:24:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:07:10.114 16:24:06 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:07:10.115 16:24:06 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:07:10.115 16:24:06 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:07:10.115 16:24:06 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:10.115 16:24:06 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:10.115 16:24:06 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:07:10.115 16:24:06 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:10.115 16:24:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:10.373 16:24:06 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:07:10.373 16:24:06 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:10.373 16:24:06 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:10.373 16:24:06 json_config -- 
json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:07:10.373 16:24:06 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:07:10.373 16:24:06 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:07:10.373 16:24:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:07:10.373 Nvme0n1p0 Nvme0n1p1 00:07:10.373 16:24:07 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:07:10.373 16:24:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:07:10.632 [2024-07-24 16:24:07.426339] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:10.632 [2024-07-24 16:24:07.426407] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:10.632 00:07:10.632 16:24:07 json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:07:10.632 16:24:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:07:11.201 Malloc3 00:07:11.201 16:24:07 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:11.201 16:24:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:11.460 [2024-07-24 16:24:08.169771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:11.460 [2024-07-24 16:24:08.169843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.460 [2024-07-24 
16:24:08.169881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044780 00:07:11.460 [2024-07-24 16:24:08.169898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.460 [2024-07-24 16:24:08.172732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.460 [2024-07-24 16:24:08.172769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:11.460 PTBdevFromMalloc3 00:07:11.460 16:24:08 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:07:11.460 16:24:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:07:12.028 Null0 00:07:12.028 16:24:08 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:07:12.028 16:24:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:07:12.287 Malloc0 00:07:12.287 16:24:08 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:07:12.287 16:24:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:07:12.546 Malloc1 00:07:12.546 16:24:09 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:07:12.546 16:24:09 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 
count=102400 00:07:12.805 102400+0 records in 00:07:12.805 102400+0 records out 00:07:12.805 104857600 bytes (105 MB, 100 MiB) copied, 0.28353 s, 370 MB/s 00:07:12.805 16:24:09 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:07:12.805 16:24:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:07:13.064 aio_disk 00:07:13.064 16:24:09 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:07:13.064 16:24:09 json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:13.064 16:24:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:18.333 b1d215fe-bc37-428e-9945-75197b8fd6a7 00:07:18.333 16:24:14 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:07:18.333 16:24:14 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:07:18.333 16:24:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:07:18.333 16:24:14 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:07:18.333 16:24:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l 
lvs_test -t lvol1 32 00:07:18.333 16:24:14 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:18.333 16:24:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:18.333 16:24:14 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:18.333 16:24:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:18.333 16:24:15 json_config -- json_config/json_config.sh@161 -- # [[ 1 -eq 1 ]] 00:07:18.333 16:24:15 json_config -- json_config/json_config.sh@162 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:07:18.333 16:24:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:07:18.592 MallocForCryptoBdev 00:07:18.592 16:24:15 json_config -- json_config/json_config.sh@163 -- # lspci -d:37c8 00:07:18.592 16:24:15 json_config -- json_config/json_config.sh@163 -- # wc -l 00:07:18.592 16:24:15 json_config -- json_config/json_config.sh@163 -- # [[ 5 -eq 0 ]] 00:07:18.592 16:24:15 json_config -- json_config/json_config.sh@166 -- # local crypto_driver=crypto_qat 00:07:18.592 16:24:15 json_config -- json_config/json_config.sh@169 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:07:18.592 16:24:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:07:18.851 [2024-07-24 16:24:15.577679] vbdev_crypto_rpc.c: 
136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:07:18.851 CryptoMallocBdev 00:07:18.851 16:24:15 json_config -- json_config/json_config.sh@173 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:07:18.851 16:24:15 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:07:18.851 16:24:15 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:f22670d0-3048-4b58-8acb-d9ec49ae1781 bdev_register:006edaa6-e848-41d2-b6df-9d0a75ab708b bdev_register:1fe534bd-2428-4202-8005-a7c2a4c942e9 bdev_register:c420de22-1c9a-4cea-8195-8ef6048567c7 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:07:18.851 16:24:15 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:07:18.851 16:24:15 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:07:18.851 16:24:15 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:07:18.851 16:24:15 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:f22670d0-3048-4b58-8acb-d9ec49ae1781 bdev_register:006edaa6-e848-41d2-b6df-9d0a75ab708b bdev_register:1fe534bd-2428-4202-8005-a7c2a4c942e9 bdev_register:c420de22-1c9a-4cea-8195-8ef6048567c7 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:07:18.851 16:24:15 json_config -- 
json_config/json_config.sh@75 -- # sort
00:07:18.851 16:24:15 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort))
00:07:18.851 16:24:15 json_config -- json_config/json_config.sh@76 -- # get_notifications
00:07:18.851 16:24:15 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id
00:07:18.851 16:24:15 json_config -- json_config/json_config.sh@76 -- # sort
00:07:18.851 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:18.851 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:18.852 16:24:15 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:07:18.852 16:24:15 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0
00:07:18.852 16:24:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:f22670d0-3048-4b58-8acb-d9ec49ae1781
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:006edaa6-e848-41d2-b6df-9d0a75ab708b
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:1fe534bd-2428-4202-8005-a7c2a4c942e9
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:c420de22-1c9a-4cea-8195-8ef6048567c7
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:MallocForCryptoBdev
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:CryptoMallocBdev
00:07:19.111 16:24:15 json_config -- json_config/json_config.sh@65 -- # IFS=:
00:07:19.112 16:24:15 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id
00:07:19.112 16:24:15 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:006edaa6-e848-41d2-b6df-9d0a75ab708b bdev_register:1fe534bd-2428-4202-8005-a7c2a4c942e9 bdev_register:aio_disk bdev_register:c420de22-1c9a-4cea-8195-8ef6048567c7 bdev_register:CryptoMallocBdev bdev_register:f22670d0-3048-4b58-8acb-d9ec49ae1781 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\0\6\e\d\a\a\6\-\e\8\4\8\-\4\1\d\2\-\b\6\d\f\-\9\d\0\a\7\5\a\b\7\0\8\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\f\e\5\3\4\b\d\-\2\4\2\8\-\4\2\0\2\-\8\0\0\5\-\a\7\c\2\a\4\c\9\4\2\e\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\c\4\2\0\d\e\2\2\-\1\c\9\a\-\4\c\e\a\-\8\1\9\5\-\8\e\f\6\0\4\8\5\6\7\c\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\2\2\6\7\0\d\0\-\3\0\4\8\-\4\b\5\8\-\8\a\c\b\-\d\9\e\c\4\9\a\e\1\7\8\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]]
00:07:19.112 16:24:15 json_config -- json_config/json_config.sh@90 -- # cat
00:07:19.112 16:24:15 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:006edaa6-e848-41d2-b6df-9d0a75ab708b bdev_register:1fe534bd-2428-4202-8005-a7c2a4c942e9 bdev_register:aio_disk bdev_register:c420de22-1c9a-4cea-8195-8ef6048567c7 bdev_register:CryptoMallocBdev bdev_register:f22670d0-3048-4b58-8acb-d9ec49ae1781 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3
00:07:19.112 Expected events matched:
00:07:19.112 bdev_register:006edaa6-e848-41d2-b6df-9d0a75ab708b
00:07:19.112 bdev_register:1fe534bd-2428-4202-8005-a7c2a4c942e9
00:07:19.112 bdev_register:aio_disk
00:07:19.112 bdev_register:c420de22-1c9a-4cea-8195-8ef6048567c7
00:07:19.112 bdev_register:CryptoMallocBdev
00:07:19.112 bdev_register:f22670d0-3048-4b58-8acb-d9ec49ae1781
00:07:19.112 bdev_register:Malloc0
00:07:19.112 bdev_register:Malloc0p0
00:07:19.112 bdev_register:Malloc0p1
00:07:19.112 bdev_register:Malloc0p2
00:07:19.112 bdev_register:Malloc1
00:07:19.112 bdev_register:Malloc3
00:07:19.112 bdev_register:MallocForCryptoBdev
00:07:19.112 bdev_register:Null0
00:07:19.112 bdev_register:Nvme0n1
00:07:19.112 bdev_register:Nvme0n1p0
00:07:19.112 bdev_register:Nvme0n1p1
00:07:19.112 bdev_register:PTBdevFromMalloc3
00:07:19.112 16:24:15 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config
00:07:19.112 16:24:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:19.112 16:24:15 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:19.112 16:24:15 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]]
00:07:19.112 16:24:15 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]]
00:07:19.112 16:24:15 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]]
00:07:19.112 16:24:15 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target
00:07:19.112 16:24:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:19.112 16:24:15 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:19.112 16:24:15 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]]
00:07:19.112 16:24:15 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:19.112 16:24:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:19.371 MallocBdevForConfigChangeCheck
00:07:19.371 16:24:16 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init
00:07:19.371 16:24:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:19.371 16:24:16 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:19.371 16:24:16 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config
00:07:19.371 16:24:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:19.938 16:24:16 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...'
00:07:19.938 INFO: shutting down applications...
00:07:19.938 16:24:16 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]]
00:07:19.938 16:24:16 json_config -- json_config/json_config.sh@372 -- # json_config_clear target
00:07:19.938 16:24:16 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]]
00:07:19.938 16:24:16 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:07:19.938 [2024-07-24 16:24:16.705115] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test
00:07:23.226 Calling clear_iscsi_subsystem
00:07:23.226 Calling clear_nvmf_subsystem
00:07:23.226 Calling clear_nbd_subsystem
00:07:23.226 Calling clear_ublk_subsystem
00:07:23.226 Calling clear_vhost_blk_subsystem
00:07:23.226 Calling clear_vhost_scsi_subsystem
00:07:23.226 Calling clear_bdev_subsystem
00:07:23.226 16:24:19 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py
00:07:23.226 16:24:19 json_config -- json_config/json_config.sh@347 -- # count=100
00:07:23.226 16:24:19 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']'
00:07:23.226 16:24:19 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:23.226 16:24:19 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:07:23.226 16:24:19 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:07:23.226 16:24:19 json_config -- json_config/json_config.sh@349 -- # break
00:07:23.226 16:24:19 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']'
00:07:23.226 16:24:19 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target
00:07:23.226 16:24:19 json_config -- json_config/common.sh@31 -- # local app=target
00:07:23.226 16:24:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:07:23.226 16:24:19 json_config -- json_config/common.sh@35 -- # [[ -n 1517781 ]]
00:07:23.226 16:24:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1517781
00:07:23.226 16:24:19 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:07:23.226 16:24:19 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:23.226 16:24:19 json_config -- json_config/common.sh@41 -- # kill -0 1517781
00:07:23.226 16:24:19 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:07:23.794 16:24:20 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:07:23.794 16:24:20 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:23.794 16:24:20 json_config -- json_config/common.sh@41 -- # kill -0 1517781
00:07:23.794 16:24:20 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:07:24.362 16:24:20 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:07:24.362 16:24:20 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:24.362 16:24:20 json_config -- json_config/common.sh@41 -- # kill -0 1517781
00:07:24.362 16:24:20 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:07:24.621 16:24:21 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:07:24.621 16:24:21 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:24.621 16:24:21 json_config -- json_config/common.sh@41 -- # kill -0 1517781
00:07:24.621 16:24:21 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:07:25.190 16:24:21 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:07:25.190 16:24:21 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:25.190 16:24:21 json_config -- json_config/common.sh@41 -- # kill -0 1517781
00:07:25.190 16:24:21 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:07:25.757 16:24:22 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:07:25.757 16:24:22 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:25.757 16:24:22 json_config -- json_config/common.sh@41 -- # kill -0 1517781
00:07:25.757 16:24:22 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:07:25.757 16:24:22 json_config -- json_config/common.sh@43 -- # break
00:07:25.757 16:24:22 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:07:25.757 16:24:22 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:07:25.757 SPDK target shutdown done
00:07:25.757 16:24:22 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...'
00:07:25.757 INFO: relaunching applications...
00:07:25.757 16:24:22 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:07:25.757 16:24:22 json_config -- json_config/common.sh@9 -- # local app=target
00:07:25.757 16:24:22 json_config -- json_config/common.sh@10 -- # shift
00:07:25.757 16:24:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:25.757 16:24:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:25.757 16:24:22 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:07:25.757 16:24:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:25.757 16:24:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:25.757 16:24:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1522956
00:07:25.757 16:24:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:07:25.757 Waiting for target to run...
00:07:25.757 16:24:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:07:25.757 16:24:22 json_config -- json_config/common.sh@25 -- # waitforlisten 1522956 /var/tmp/spdk_tgt.sock
00:07:25.757 16:24:22 json_config -- common/autotest_common.sh@831 -- # '[' -z 1522956 ']'
00:07:25.757 16:24:22 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:25.757 16:24:22 json_config -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:25.757 16:24:22 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:25.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:25.757 16:24:22 json_config -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:25.757 16:24:22 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:25.757 [2024-07-24 16:24:22.579286] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:07:25.757 [2024-07-24 16:24:22.579385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522956 ]
00:07:26.324 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.324 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:26.324 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.324 EAL: Requested device 0000:3d:01.1 cannot be used
00:07:26.324 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.324 EAL: Requested device 0000:3d:01.2 cannot be used
00:07:26.324 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.324 EAL: Requested device 0000:3d:01.3 cannot be used
00:07:26.324 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.324 EAL: Requested device 0000:3d:01.4 cannot be used
00:07:26.324 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.324 EAL: Requested device 0000:3d:01.5 cannot be used
00:07:26.324 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.324 EAL: Requested device 0000:3d:01.6 cannot be used
00:07:26.324 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.324 EAL: Requested device 0000:3d:01.7 cannot be used
00:07:26.324 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.324 EAL: Requested device 0000:3d:02.0 cannot be used
00:07:26.324 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.324 EAL: Requested device 0000:3d:02.1 cannot be used
00:07:26.324 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.324 EAL: Requested device 0000:3d:02.2 cannot be used
00:07:26.324 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.324 EAL: Requested device 0000:3d:02.3 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3d:02.4 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3d:02.5 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3d:02.6 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3d:02.7 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:01.0 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:01.1 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:01.2 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:01.3 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:01.4 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:01.5 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:01.6 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:01.7 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:02.0 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:02.1 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:02.2 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:02.3 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:02.4 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:02.5 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:02.6 cannot be used
00:07:26.325 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:26.325 EAL: Requested device 0000:3f:02.7 cannot be used
00:07:26.325 [2024-07-24 16:24:23.147586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:26.583 [2024-07-24 16:24:23.399492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.841 [2024-07-24 16:24:23.454199] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:07:26.841 [2024-07-24 16:24:23.462247] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:07:26.841 [2024-07-24 16:24:23.470253] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:07:27.137 [2024-07-24 16:24:23.799904] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:07:31.328 [2024-07-24 16:24:27.290799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:07:31.328 [2024-07-24 16:24:27.290879] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:07:31.328 [2024-07-24 16:24:27.290899] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:07:31.328 [2024-07-24 16:24:27.298817] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:07:31.328 [2024-07-24 16:24:27.298870] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:07:31.328 [2024-07-24 16:24:27.306822] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:07:31.328 [2024-07-24 16:24:27.306865] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:07:31.328 [2024-07-24 16:24:27.314862] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC"
00:07:31.328 [2024-07-24 16:24:27.314926] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev
00:07:31.328 [2024-07-24 16:24:27.314947] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:07:33.862 [2024-07-24 16:24:30.305423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:07:33.862 [2024-07-24 16:24:30.305497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:33.862 [2024-07-24 16:24:30.305519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80
00:07:33.862 [2024-07-24 16:24:30.305534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:33.862 [2024-07-24 16:24:30.306106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:33.862 [2024-07-24 16:24:30.306132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:07:34.120 16:24:30 json_config -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:34.120 16:24:30 json_config -- common/autotest_common.sh@864 -- # return 0
00:07:34.120 16:24:30 json_config -- json_config/common.sh@26 -- # echo ''
00:07:34.120
00:07:34.120 16:24:30 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]]
00:07:34.120 16:24:30 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...'
00:07:34.120 INFO: Checking if target configuration is the same...
00:07:34.120 16:24:30 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config
00:07:34.121 16:24:30 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:07:34.121 16:24:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:34.121 + '[' 2 -ne 2 ']'
00:07:34.121 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh
00:07:34.121 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../..
00:07:34.121 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:07:34.121 +++ basename /dev/fd/62
00:07:34.121 ++ mktemp /tmp/62.XXX
00:07:34.121 + tmp_file_1=/tmp/62.ILH
00:07:34.121 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:07:34.121 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:34.121 + tmp_file_2=/tmp/spdk_tgt_config.json.AP6
00:07:34.121 + ret=0
00:07:34.121 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:34.380 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:34.380 + diff -u /tmp/62.ILH /tmp/spdk_tgt_config.json.AP6
00:07:34.380 + echo 'INFO: JSON config files are the same'
00:07:34.380 INFO: JSON config files are the same
00:07:34.380 + rm /tmp/62.ILH /tmp/spdk_tgt_config.json.AP6
00:07:34.380 + exit 0
00:07:34.380 16:24:31 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]]
00:07:34.380 16:24:31 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:07:34.380 INFO: changing configuration and checking if this can be detected...
00:07:34.380 16:24:31 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:34.380 16:24:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:07:34.639 16:24:31 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:07:34.639 16:24:31 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config
00:07:34.639 16:24:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:34.639 + '[' 2 -ne 2 ']'
00:07:34.639 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh
00:07:34.639 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../..
00:07:34.639 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:07:34.639 +++ basename /dev/fd/62
00:07:34.639 ++ mktemp /tmp/62.XXX
00:07:34.639 + tmp_file_1=/tmp/62.iDk
00:07:34.639 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:07:34.639 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:34.639 + tmp_file_2=/tmp/spdk_tgt_config.json.IV1
00:07:34.639 + ret=0
00:07:34.639 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:34.898 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:35.157 + diff -u /tmp/62.iDk /tmp/spdk_tgt_config.json.IV1
00:07:35.157 + ret=1
00:07:35.157 + echo '=== Start of file: /tmp/62.iDk ==='
00:07:35.157 + cat /tmp/62.iDk
00:07:35.157 + echo '=== End of file: /tmp/62.iDk ==='
00:07:35.157 + echo ''
00:07:35.157 + echo '=== Start of file: /tmp/spdk_tgt_config.json.IV1 ==='
00:07:35.157 + cat /tmp/spdk_tgt_config.json.IV1
00:07:35.157 + echo '=== End of file: /tmp/spdk_tgt_config.json.IV1 ==='
00:07:35.157 + echo ''
00:07:35.157 + rm /tmp/62.iDk /tmp/spdk_tgt_config.json.IV1
00:07:35.157 + exit 1
00:07:35.157 16:24:31 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.'
00:07:35.157 INFO: configuration change detected.
00:07:35.157 16:24:31 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini
00:07:35.157 16:24:31 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini
00:07:35.157 16:24:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:35.157 16:24:31 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:35.157 16:24:31 json_config -- json_config/json_config.sh@311 -- # local ret=0
00:07:35.157 16:24:31 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]]
00:07:35.157 16:24:31 json_config -- json_config/json_config.sh@321 -- # [[ -n 1522956 ]]
00:07:35.157 16:24:31 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config
00:07:35.157 16:24:31 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config
00:07:35.157 16:24:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:35.157 16:24:31 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:35.157 16:24:31 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]]
00:07:35.157 16:24:31 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0
00:07:35.157 16:24:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0
00:07:35.157 16:24:31 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0
00:07:35.157 16:24:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0
00:07:35.416 16:24:32 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0
00:07:35.416 16:24:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0
00:07:35.674 16:24:32 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test
00:07:35.674 16:24:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test
00:07:35.674 16:24:32 json_config -- json_config/json_config.sh@197 -- # uname -s
00:07:35.674 16:24:32 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]]
00:07:35.674 16:24:32 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio
00:07:35.674 16:24:32 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]]
00:07:35.674 16:24:32 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config
00:07:35.675 16:24:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:35.675 16:24:32 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:35.675 16:24:32 json_config -- json_config/json_config.sh@327 -- # killprocess 1522956
00:07:35.675 16:24:32 json_config -- common/autotest_common.sh@950 -- # '[' -z 1522956 ']'
00:07:35.675 16:24:32 json_config -- common/autotest_common.sh@954 -- # kill -0 1522956
00:07:35.675 16:24:32 json_config -- common/autotest_common.sh@955 -- # uname
00:07:35.675 16:24:32 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:35.675 16:24:32 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1522956
00:07:35.933 16:24:32 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:35.933 16:24:32 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:35.933 16:24:32 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1522956'
00:07:35.933 killing process with pid 1522956
00:07:35.933 16:24:32 json_config -- common/autotest_common.sh@969 -- # kill 1522956
00:07:35.933 16:24:32 json_config -- common/autotest_common.sh@974 -- # wait 1522956
00:07:41.203 16:24:37 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:07:41.203 16:24:37 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini
00:07:41.203 16:24:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:41.203 16:24:37 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:41.203 16:24:37 json_config -- json_config/json_config.sh@332 -- # return 0
00:07:41.203 16:24:37 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success'
00:07:41.203 INFO: Success
00:07:41.203
00:07:41.203 real 0m40.112s
00:07:41.203 user 0m44.590s
00:07:41.203 sys 0m4.488s
00:07:41.203 16:24:37 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:41.203 16:24:37 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:41.203 ************************************
00:07:41.203 END TEST json_config
00:07:41.203 ************************************
00:07:41.203 16:24:37 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:41.203 16:24:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:41.203 16:24:37 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:41.203 16:24:37 -- common/autotest_common.sh@10 -- # set +x
00:07:41.203 ************************************
00:07:41.203 START TEST json_config_extra_key
00:07:41.203 ************************************
00:07:41.203 16:24:37 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:41.203 16:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:07:41.203 16:24:37 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e
/bin/wpdk_common.sh ]] 00:07:41.203 16:24:37 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.203 16:24:37 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.203 16:24:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.203 16:24:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.203 16:24:37 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.203 16:24:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:41.203 16:24:37 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:41.203 16:24:37 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:41.203 16:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:07:41.203 16:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:41.203 16:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:41.203 16:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:41.203 16:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:41.203 16:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 
1024') 00:07:41.204 16:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:41.204 16:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:41.204 16:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:41.204 16:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:41.204 16:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:41.204 INFO: launching applications... 00:07:41.204 16:24:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:07:41.204 16:24:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:41.204 16:24:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:41.204 16:24:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:41.204 16:24:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:41.204 16:24:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:41.204 16:24:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:41.204 16:24:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:41.204 16:24:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1525572 00:07:41.204 16:24:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:41.204 Waiting for target to run... 
00:07:41.204 16:24:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1525572 /var/tmp/spdk_tgt.sock 00:07:41.204 16:24:37 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1525572 ']' 00:07:41.204 16:24:37 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:41.204 16:24:37 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:07:41.204 16:24:37 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.204 16:24:37 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:41.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:41.204 16:24:37 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.204 16:24:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:41.204 [2024-07-24 16:24:37.615042] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:07:41.204 [2024-07-24 16:24:37.615179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525572 ] 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:41.204 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:41.204 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.204 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:41.463 [2024-07-24 16:24:38.066386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.463 [2024-07-24 16:24:38.320482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.841 16:24:39 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.841 16:24:39 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:42.841 16:24:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:42.841 00:07:42.841 16:24:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:42.841 INFO: shutting down applications... 
00:07:42.841 16:24:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:42.841 16:24:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:42.841 16:24:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:42.841 16:24:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1525572 ]] 00:07:42.841 16:24:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1525572 00:07:42.841 16:24:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:42.841 16:24:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:42.841 16:24:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1525572 00:07:42.841 16:24:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:43.100 16:24:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:43.101 16:24:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:43.101 16:24:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1525572 00:07:43.101 16:24:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:43.669 16:24:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:43.669 16:24:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:43.669 16:24:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1525572 00:07:43.669 16:24:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:44.238 16:24:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:44.238 16:24:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:44.238 16:24:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1525572 00:07:44.238 16:24:40 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:44.238 16:24:40 json_config_extra_key -- 
json_config/common.sh@43 -- # break 00:07:44.238 16:24:40 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:44.238 16:24:40 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:44.238 SPDK target shutdown done 00:07:44.238 16:24:40 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:44.238 Success 00:07:44.238 00:07:44.238 real 0m3.446s 00:07:44.238 user 0m2.880s 00:07:44.238 sys 0m0.676s 00:07:44.238 16:24:40 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.238 16:24:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:44.238 ************************************ 00:07:44.238 END TEST json_config_extra_key 00:07:44.238 ************************************ 00:07:44.238 16:24:40 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:44.238 16:24:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.238 16:24:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.238 16:24:40 -- common/autotest_common.sh@10 -- # set +x 00:07:44.238 ************************************ 00:07:44.238 START TEST alias_rpc 00:07:44.238 ************************************ 00:07:44.238 16:24:40 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:44.238 * Looking for test storage... 
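The shutdown sequence traced above (a `kill -SIGINT`, then repeated `kill -0` probes with `sleep 0.5`, capped at 30 iterations before `break`) can be sketched as a standalone bash helper. This is an illustrative reconstruction, not the actual `json_config/common.sh` code; the function name `wait_for_shutdown` and the echoed message are assumptions made for the sketch.

```shell
# Sketch of the polling shutdown pattern seen in the trace above:
# ask the target to stop with SIGINT, then probe its pid with `kill -0`
# every 0.5 s, giving up after 30 attempts (~15 s) so a hung target
# cannot stall the whole run.
wait_for_shutdown() {
  local pid=$1 i
  kill -SIGINT "$pid" 2>/dev/null || true
  for ((i = 0; i < 30; i++)); do
    # `kill -0` sends no signal; it only tests whether the pid still exists.
    if ! kill -0 "$pid" 2>/dev/null; then
      echo 'SPDK target shutdown done'
      return 0
    fi
    sleep 0.5
  done
  return 1
}
```

The 0.5 s poll interval trades shutdown latency against log noise; the bounded retry count is what lets the harness print "SPDK target shutdown done" and move on rather than hanging forever on a wedged reactor.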
00:07:44.238 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:07:44.238 16:24:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:44.238 16:24:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1526297 00:07:44.238 16:24:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1526297 00:07:44.238 16:24:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:44.238 16:24:41 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1526297 ']' 00:07:44.238 16:24:41 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.238 16:24:41 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.238 16:24:41 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.238 16:24:41 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.238 16:24:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.497 [2024-07-24 16:24:41.151266] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:07:44.497 [2024-07-24 16:24:41.151391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526297 ] 00:07:44.756 [2024-07-24 16:24:41.363459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.015 [2024-07-24 16:24:41.620787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.393 16:24:42 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.393 16:24:42 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:46.393 16:24:42 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:46.393 16:24:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1526297 00:07:46.393 16:24:43 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1526297 ']' 00:07:46.393 16:24:43 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1526297 00:07:46.393 16:24:43 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:46.393 16:24:43 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.393 16:24:43 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1526297 00:07:46.393 16:24:43 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.393 16:24:43
alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.393 16:24:43 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1526297' 00:07:46.393 killing process with pid 1526297 00:07:46.393 16:24:43 alias_rpc -- common/autotest_common.sh@969 -- # kill 1526297 00:07:46.393 16:24:43 alias_rpc -- common/autotest_common.sh@974 -- # wait 1526297 00:07:49.694 00:07:49.694 real 0m5.603s 00:07:49.694 user 0m5.547s 00:07:49.694 sys 0m0.768s 00:07:49.694 16:24:46 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.694 16:24:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.694 ************************************ 00:07:49.694 END TEST alias_rpc 00:07:49.694 ************************************ 00:07:49.953 16:24:46 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:49.953 16:24:46 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:49.953 16:24:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.953 16:24:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.953 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:07:49.953 ************************************ 00:07:49.953 START TEST spdkcli_tcp 00:07:49.953 ************************************ 00:07:49.953 16:24:46 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:49.953 * Looking for test storage... 
00:07:49.953 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:07:49.953 16:24:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:07:49.953 16:24:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:49.953 16:24:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:07:49.953 16:24:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:49.953 16:24:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:49.953 16:24:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:49.953 16:24:46 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:49.953 16:24:46 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:49.953 16:24:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.953 16:24:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1527220 00:07:49.953 16:24:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1527220 00:07:49.953 16:24:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:49.953 16:24:46 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1527220 ']' 00:07:49.953 16:24:46 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.953 16:24:46 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.953 16:24:46 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:49.953 16:24:46 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.953 16:24:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.212 [2024-07-24 16:24:46.846029] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:07:50.212 [2024-07-24 16:24:46.846158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527220 ] 00:07:50.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:50.212 EAL: Requested device 0000:3d:01.0 cannot be used [the qat_pci_device_allocate()/EAL pair above repeats for the remaining 31 QAT virtual functions: 0000:3d:01.1-01.7, 0000:3d:02.0-02.7, 0000:3f:01.0-01.7 and 0000:3f:02.0-02.7] 00:07:50.471 [2024-07-24 16:24:47.074430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:50.730 [2024-07-24 16:24:47.366926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.730 [2024-07-24 16:24:47.366934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.107 16:24:48 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.107 16:24:48 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:52.107 16:24:48 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1527620 00:07:52.107 16:24:48 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:52.107 16:24:48 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:52.107 [ 00:07:52.107
"bdev_malloc_delete", 00:07:52.107 "bdev_malloc_create", 00:07:52.107 "bdev_null_resize", 00:07:52.107 "bdev_null_delete", 00:07:52.107 "bdev_null_create", 00:07:52.107 "bdev_nvme_cuse_unregister", 00:07:52.107 "bdev_nvme_cuse_register", 00:07:52.107 "bdev_opal_new_user", 00:07:52.107 "bdev_opal_set_lock_state", 00:07:52.107 "bdev_opal_delete", 00:07:52.107 "bdev_opal_get_info", 00:07:52.107 "bdev_opal_create", 00:07:52.107 "bdev_nvme_opal_revert", 00:07:52.107 "bdev_nvme_opal_init", 00:07:52.107 "bdev_nvme_send_cmd", 00:07:52.107 "bdev_nvme_get_path_iostat", 00:07:52.107 "bdev_nvme_get_mdns_discovery_info", 00:07:52.107 "bdev_nvme_stop_mdns_discovery", 00:07:52.107 "bdev_nvme_start_mdns_discovery", 00:07:52.107 "bdev_nvme_set_multipath_policy", 00:07:52.107 "bdev_nvme_set_preferred_path", 00:07:52.107 "bdev_nvme_get_io_paths", 00:07:52.107 "bdev_nvme_remove_error_injection", 00:07:52.107 "bdev_nvme_add_error_injection", 00:07:52.107 "bdev_nvme_get_discovery_info", 00:07:52.107 "bdev_nvme_stop_discovery", 00:07:52.107 "bdev_nvme_start_discovery", 00:07:52.107 "bdev_nvme_get_controller_health_info", 00:07:52.107 "bdev_nvme_disable_controller", 00:07:52.107 "bdev_nvme_enable_controller", 00:07:52.107 "bdev_nvme_reset_controller", 00:07:52.107 "bdev_nvme_get_transport_statistics", 00:07:52.107 "bdev_nvme_apply_firmware", 00:07:52.107 "bdev_nvme_detach_controller", 00:07:52.107 "bdev_nvme_get_controllers", 00:07:52.107 "bdev_nvme_attach_controller", 00:07:52.107 "bdev_nvme_set_hotplug", 00:07:52.107 "bdev_nvme_set_options", 00:07:52.107 "bdev_passthru_delete", 00:07:52.107 "bdev_passthru_create", 00:07:52.107 "bdev_lvol_set_parent_bdev", 00:07:52.107 "bdev_lvol_set_parent", 00:07:52.107 "bdev_lvol_check_shallow_copy", 00:07:52.107 "bdev_lvol_start_shallow_copy", 00:07:52.107 "bdev_lvol_grow_lvstore", 00:07:52.107 "bdev_lvol_get_lvols", 00:07:52.107 "bdev_lvol_get_lvstores", 00:07:52.107 "bdev_lvol_delete", 00:07:52.107 "bdev_lvol_set_read_only", 00:07:52.107 
"bdev_lvol_resize", 00:07:52.107 "bdev_lvol_decouple_parent", 00:07:52.107 "bdev_lvol_inflate", 00:07:52.107 "bdev_lvol_rename", 00:07:52.107 "bdev_lvol_clone_bdev", 00:07:52.107 "bdev_lvol_clone", 00:07:52.107 "bdev_lvol_snapshot", 00:07:52.107 "bdev_lvol_create", 00:07:52.107 "bdev_lvol_delete_lvstore", 00:07:52.107 "bdev_lvol_rename_lvstore", 00:07:52.107 "bdev_lvol_create_lvstore", 00:07:52.107 "bdev_raid_set_options", 00:07:52.107 "bdev_raid_remove_base_bdev", 00:07:52.107 "bdev_raid_add_base_bdev", 00:07:52.107 "bdev_raid_delete", 00:07:52.107 "bdev_raid_create", 00:07:52.107 "bdev_raid_get_bdevs", 00:07:52.107 "bdev_error_inject_error", 00:07:52.107 "bdev_error_delete", 00:07:52.107 "bdev_error_create", 00:07:52.107 "bdev_split_delete", 00:07:52.107 "bdev_split_create", 00:07:52.107 "bdev_delay_delete", 00:07:52.107 "bdev_delay_create", 00:07:52.107 "bdev_delay_update_latency", 00:07:52.107 "bdev_zone_block_delete", 00:07:52.107 "bdev_zone_block_create", 00:07:52.107 "blobfs_create", 00:07:52.107 "blobfs_detect", 00:07:52.107 "blobfs_set_cache_size", 00:07:52.107 "bdev_crypto_delete", 00:07:52.107 "bdev_crypto_create", 00:07:52.107 "bdev_compress_delete", 00:07:52.107 "bdev_compress_create", 00:07:52.107 "bdev_compress_get_orphans", 00:07:52.107 "bdev_aio_delete", 00:07:52.107 "bdev_aio_rescan", 00:07:52.107 "bdev_aio_create", 00:07:52.107 "bdev_ftl_set_property", 00:07:52.107 "bdev_ftl_get_properties", 00:07:52.107 "bdev_ftl_get_stats", 00:07:52.107 "bdev_ftl_unmap", 00:07:52.107 "bdev_ftl_unload", 00:07:52.107 "bdev_ftl_delete", 00:07:52.107 "bdev_ftl_load", 00:07:52.107 "bdev_ftl_create", 00:07:52.107 "bdev_virtio_attach_controller", 00:07:52.107 "bdev_virtio_scsi_get_devices", 00:07:52.107 "bdev_virtio_detach_controller", 00:07:52.107 "bdev_virtio_blk_set_hotplug", 00:07:52.107 "bdev_iscsi_delete", 00:07:52.107 "bdev_iscsi_create", 00:07:52.107 "bdev_iscsi_set_options", 00:07:52.107 "accel_error_inject_error", 00:07:52.107 "ioat_scan_accel_module", 
00:07:52.107 "dsa_scan_accel_module", 00:07:52.107 "iaa_scan_accel_module", 00:07:52.107 "dpdk_cryptodev_get_driver", 00:07:52.107 "dpdk_cryptodev_set_driver", 00:07:52.108 "dpdk_cryptodev_scan_accel_module", 00:07:52.108 "compressdev_scan_accel_module", 00:07:52.108 "keyring_file_remove_key", 00:07:52.108 "keyring_file_add_key", 00:07:52.108 "keyring_linux_set_options", 00:07:52.108 "iscsi_get_histogram", 00:07:52.108 "iscsi_enable_histogram", 00:07:52.108 "iscsi_set_options", 00:07:52.108 "iscsi_get_auth_groups", 00:07:52.108 "iscsi_auth_group_remove_secret", 00:07:52.108 "iscsi_auth_group_add_secret", 00:07:52.108 "iscsi_delete_auth_group", 00:07:52.108 "iscsi_create_auth_group", 00:07:52.108 "iscsi_set_discovery_auth", 00:07:52.108 "iscsi_get_options", 00:07:52.108 "iscsi_target_node_request_logout", 00:07:52.108 "iscsi_target_node_set_redirect", 00:07:52.108 "iscsi_target_node_set_auth", 00:07:52.108 "iscsi_target_node_add_lun", 00:07:52.108 "iscsi_get_stats", 00:07:52.108 "iscsi_get_connections", 00:07:52.108 "iscsi_portal_group_set_auth", 00:07:52.108 "iscsi_start_portal_group", 00:07:52.108 "iscsi_delete_portal_group", 00:07:52.108 "iscsi_create_portal_group", 00:07:52.108 "iscsi_get_portal_groups", 00:07:52.108 "iscsi_delete_target_node", 00:07:52.108 "iscsi_target_node_remove_pg_ig_maps", 00:07:52.108 "iscsi_target_node_add_pg_ig_maps", 00:07:52.108 "iscsi_create_target_node", 00:07:52.108 "iscsi_get_target_nodes", 00:07:52.108 "iscsi_delete_initiator_group", 00:07:52.108 "iscsi_initiator_group_remove_initiators", 00:07:52.108 "iscsi_initiator_group_add_initiators", 00:07:52.108 "iscsi_create_initiator_group", 00:07:52.108 "iscsi_get_initiator_groups", 00:07:52.108 "nvmf_set_crdt", 00:07:52.108 "nvmf_set_config", 00:07:52.108 "nvmf_set_max_subsystems", 00:07:52.108 "nvmf_stop_mdns_prr", 00:07:52.108 "nvmf_publish_mdns_prr", 00:07:52.108 "nvmf_subsystem_get_listeners", 00:07:52.108 "nvmf_subsystem_get_qpairs", 00:07:52.108 "nvmf_subsystem_get_controllers", 
00:07:52.108 "nvmf_get_stats", 00:07:52.108 "nvmf_get_transports", 00:07:52.108 "nvmf_create_transport", 00:07:52.108 "nvmf_get_targets", 00:07:52.108 "nvmf_delete_target", 00:07:52.108 "nvmf_create_target", 00:07:52.108 "nvmf_subsystem_allow_any_host", 00:07:52.108 "nvmf_subsystem_remove_host", 00:07:52.108 "nvmf_subsystem_add_host", 00:07:52.108 "nvmf_ns_remove_host", 00:07:52.108 "nvmf_ns_add_host", 00:07:52.108 "nvmf_subsystem_remove_ns", 00:07:52.108 "nvmf_subsystem_add_ns", 00:07:52.108 "nvmf_subsystem_listener_set_ana_state", 00:07:52.108 "nvmf_discovery_get_referrals", 00:07:52.108 "nvmf_discovery_remove_referral", 00:07:52.108 "nvmf_discovery_add_referral", 00:07:52.108 "nvmf_subsystem_remove_listener", 00:07:52.108 "nvmf_subsystem_add_listener", 00:07:52.108 "nvmf_delete_subsystem", 00:07:52.108 "nvmf_create_subsystem", 00:07:52.108 "nvmf_get_subsystems", 00:07:52.108 "env_dpdk_get_mem_stats", 00:07:52.108 "nbd_get_disks", 00:07:52.108 "nbd_stop_disk", 00:07:52.108 "nbd_start_disk", 00:07:52.108 "ublk_recover_disk", 00:07:52.108 "ublk_get_disks", 00:07:52.108 "ublk_stop_disk", 00:07:52.108 "ublk_start_disk", 00:07:52.108 "ublk_destroy_target", 00:07:52.108 "ublk_create_target", 00:07:52.108 "virtio_blk_create_transport", 00:07:52.108 "virtio_blk_get_transports", 00:07:52.108 "vhost_controller_set_coalescing", 00:07:52.108 "vhost_get_controllers", 00:07:52.108 "vhost_delete_controller", 00:07:52.108 "vhost_create_blk_controller", 00:07:52.108 "vhost_scsi_controller_remove_target", 00:07:52.108 "vhost_scsi_controller_add_target", 00:07:52.108 "vhost_start_scsi_controller", 00:07:52.108 "vhost_create_scsi_controller", 00:07:52.108 "thread_set_cpumask", 00:07:52.108 "framework_get_governor", 00:07:52.108 "framework_get_scheduler", 00:07:52.108 "framework_set_scheduler", 00:07:52.108 "framework_get_reactors", 00:07:52.108 "thread_get_io_channels", 00:07:52.108 "thread_get_pollers", 00:07:52.108 "thread_get_stats", 00:07:52.108 
"framework_monitor_context_switch", 00:07:52.108 "spdk_kill_instance", 00:07:52.108 "log_enable_timestamps", 00:07:52.108 "log_get_flags", 00:07:52.108 "log_clear_flag", 00:07:52.108 "log_set_flag", 00:07:52.108 "log_get_level", 00:07:52.108 "log_set_level", 00:07:52.108 "log_get_print_level", 00:07:52.108 "log_set_print_level", 00:07:52.108 "framework_enable_cpumask_locks", 00:07:52.108 "framework_disable_cpumask_locks", 00:07:52.108 "framework_wait_init", 00:07:52.108 "framework_start_init", 00:07:52.108 "scsi_get_devices", 00:07:52.108 "bdev_get_histogram", 00:07:52.108 "bdev_enable_histogram", 00:07:52.108 "bdev_set_qos_limit", 00:07:52.108 "bdev_set_qd_sampling_period", 00:07:52.108 "bdev_get_bdevs", 00:07:52.108 "bdev_reset_iostat", 00:07:52.108 "bdev_get_iostat", 00:07:52.108 "bdev_examine", 00:07:52.108 "bdev_wait_for_examine", 00:07:52.108 "bdev_set_options", 00:07:52.108 "notify_get_notifications", 00:07:52.108 "notify_get_types", 00:07:52.108 "accel_get_stats", 00:07:52.108 "accel_set_options", 00:07:52.108 "accel_set_driver", 00:07:52.108 "accel_crypto_key_destroy", 00:07:52.108 "accel_crypto_keys_get", 00:07:52.108 "accel_crypto_key_create", 00:07:52.108 "accel_assign_opc", 00:07:52.108 "accel_get_module_info", 00:07:52.108 "accel_get_opc_assignments", 00:07:52.108 "vmd_rescan", 00:07:52.108 "vmd_remove_device", 00:07:52.108 "vmd_enable", 00:07:52.108 "sock_get_default_impl", 00:07:52.108 "sock_set_default_impl", 00:07:52.108 "sock_impl_set_options", 00:07:52.108 "sock_impl_get_options", 00:07:52.109 "iobuf_get_stats", 00:07:52.109 "iobuf_set_options", 00:07:52.109 "framework_get_pci_devices", 00:07:52.109 "framework_get_config", 00:07:52.109 "framework_get_subsystems", 00:07:52.109 "trace_get_info", 00:07:52.109 "trace_get_tpoint_group_mask", 00:07:52.109 "trace_disable_tpoint_group", 00:07:52.109 "trace_enable_tpoint_group", 00:07:52.109 "trace_clear_tpoint_mask", 00:07:52.109 "trace_set_tpoint_mask", 00:07:52.109 "keyring_get_keys", 00:07:52.109 
"spdk_get_version", 00:07:52.109 "rpc_get_methods" 00:07:52.109 ] 00:07:52.109 16:24:48 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:52.109 16:24:48 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:52.109 16:24:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.109 16:24:48 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:52.109 16:24:48 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1527220 00:07:52.109 16:24:48 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1527220 ']' 00:07:52.109 16:24:48 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1527220 00:07:52.109 16:24:48 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:52.109 16:24:48 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.109 16:24:48 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1527220 00:07:52.109 16:24:48 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.109 16:24:48 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.109 16:24:48 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1527220' 00:07:52.109 killing process with pid 1527220 00:07:52.109 16:24:48 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1527220 00:07:52.109 16:24:48 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1527220 00:07:55.391 00:07:55.391 real 0m5.507s 00:07:55.391 user 0m9.644s 00:07:55.391 sys 0m0.771s 00:07:55.391 16:24:52 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.391 16:24:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:55.391 ************************************ 00:07:55.391 END TEST spdkcli_tcp 00:07:55.391 ************************************ 00:07:55.391 16:24:52 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:55.391 16:24:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.391 16:24:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.391 16:24:52 -- common/autotest_common.sh@10 -- # set +x 00:07:55.391 ************************************ 00:07:55.391 START TEST dpdk_mem_utility 00:07:55.391 ************************************ 00:07:55.391 16:24:52 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:55.650 * Looking for test storage... 00:07:55.650 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility 00:07:55.650 16:24:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:55.650 16:24:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1528274 00:07:55.650 16:24:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:55.650 16:24:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1528274 00:07:55.650 16:24:52 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1528274 ']' 00:07:55.650 16:24:52 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.650 16:24:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.650 16:24:52 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:55.650 16:24:52 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.650 16:24:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:55.650 [2024-07-24 16:24:52.425912] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:07:55.650 [2024-07-24 16:24:52.426034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528274 ] 00:07:55.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:55.909 EAL: Requested device 0000:3d:01.0 cannot be used [the qat_pci_device_allocate()/EAL pair above repeats for the remaining 31 QAT virtual functions: 0000:3d:01.1-01.7, 0000:3d:02.0-02.7, 0000:3f:01.0-01.7 and 0000:3f:02.0-02.7] 00:07:55.909 [2024-07-24 16:24:52.637710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.168 [2024-07-24 16:24:52.923125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.546 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.546 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:57.546 16:24:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:57.546 16:24:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:57.546 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.546 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:57.546 { 00:07:57.546 "filename":
"/tmp/spdk_mem_dump.txt" 00:07:57.546 } 00:07:57.546 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.546 16:24:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:57.546 DPDK memory size 820.000000 MiB in 1 heap(s) 00:07:57.546 1 heaps totaling size 820.000000 MiB 00:07:57.546 size: 820.000000 MiB heap id: 0 00:07:57.546 end heaps---------- 00:07:57.546 8 mempools totaling size 598.116089 MiB 00:07:57.546 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:57.546 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:57.546 size: 84.521057 MiB name: bdev_io_1528274 00:07:57.546 size: 51.011292 MiB name: evtpool_1528274 00:07:57.546 size: 50.003479 MiB name: msgpool_1528274 00:07:57.546 size: 21.763794 MiB name: PDU_Pool 00:07:57.546 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:57.546 size: 0.026123 MiB name: Session_Pool 00:07:57.546 end mempools------- 00:07:57.546 201 memzones totaling size 4.176453 MiB 00:07:57.546 size: 1.000366 MiB name: RG_ring_0_1528274 00:07:57.546 size: 1.000366 MiB name: RG_ring_1_1528274 00:07:57.546 size: 1.000366 MiB name: RG_ring_4_1528274 00:07:57.546 size: 1.000366 MiB name: RG_ring_5_1528274 00:07:57.546 size: 0.125366 MiB name: RG_ring_2_1528274 00:07:57.546 size: 0.015991 MiB name: RG_ring_3_1528274 00:07:57.546 size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:07:57.546 size: 0.000305 MiB name: 0000:1a:01.0_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1a:01.1_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1a:01.2_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1a:01.3_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1a:01.4_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1a:01.5_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1a:01.6_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1a:01.7_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1a:02.0_qat 00:07:57.546 size: 0.000305 MiB name: 
0000:1a:02.1_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1a:02.2_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1a:02.3_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1a:02.4_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1a:02.5_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1a:02.6_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1a:02.7_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:01.0_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:01.1_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:01.2_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:01.3_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:01.4_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:01.5_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:01.6_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:01.7_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:02.0_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:02.1_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:02.2_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:02.3_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:02.4_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:02.5_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:02.6_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1c:02.7_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:01.0_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:01.1_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:01.2_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:01.3_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:01.4_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:01.5_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:01.6_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:01.7_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:02.0_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:02.1_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:02.2_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:02.3_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:02.4_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:02.5_qat 
00:07:57.546 size: 0.000305 MiB name: 0000:1e:02.6_qat 00:07:57.546 size: 0.000305 MiB name: 0000:1e:02.7_qat 00:07:57.546 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_0 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_1 00:07:57.546 size: 0.000122 MiB name: rte_compressdev_data_0 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_2 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_3 00:07:57.546 size: 0.000122 MiB name: rte_compressdev_data_1 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_4 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_5 00:07:57.546 size: 0.000122 MiB name: rte_compressdev_data_2 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_6 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_7 00:07:57.546 size: 0.000122 MiB name: rte_compressdev_data_3 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_8 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_9 00:07:57.546 size: 0.000122 MiB name: rte_compressdev_data_4 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_10 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_11 00:07:57.546 size: 0.000122 MiB name: rte_compressdev_data_5 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_12 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_13 00:07:57.546 size: 0.000122 MiB name: rte_compressdev_data_6 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_14 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_15 00:07:57.546 size: 0.000122 MiB name: rte_compressdev_data_7 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_16 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_17 00:07:57.546 size: 0.000122 MiB name: rte_compressdev_data_8 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_18 00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_19 00:07:57.546 size: 0.000122 MiB name: rte_compressdev_data_9 00:07:57.546 size: 0.000122 MiB 
00:07:57.546 size: 0.000122 MiB name: rte_cryptodev_data_20
[... identical 0.000122 MiB memzone entries elided: rte_cryptodev_data_21 through rte_cryptodev_data_95 and rte_compressdev_data_10 through rte_compressdev_data_47 ...]
00:07:57.547 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1
00:07:57.547 end memzones-------
00:07:57.547 16:24:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:07:57.547 heap id: 0 total size: 820.000000 MiB number of busy elements: 624 number of free elements: 17
00:07:57.547 list of free elements. size: 17.731812 MiB
00:07:57.547 element at address: 0x200000400000 with size: 1.999451 MiB
00:07:57.547 element at address: 0x200000800000 with size: 1.996887 MiB
00:07:57.547 element at address: 0x200007000000 with size: 1.995972 MiB
00:07:57.547 element at address: 0x20000b200000 with size: 1.995972 MiB
00:07:57.547 element at address: 0x200019100040 with size: 0.999939 MiB
00:07:57.547 element at address: 0x200019500040 with size: 0.999939 MiB
00:07:57.547 element at address: 0x200019900040 with size: 0.999939 MiB
00:07:57.547 element at address: 0x200019600000 with size: 0.999329 MiB
00:07:57.547 element at address: 0x200003e00000 with size: 0.996338 MiB
00:07:57.547 element at address: 0x200032200000 with size: 0.994324 MiB
00:07:57.547 element at address: 0x200018e00000 with size: 0.959900 MiB
00:07:57.547 element at address: 0x20001b000000 with size: 0.583191 MiB
00:07:57.547 element at address: 0x200019200000 with size: 0.491150 MiB
00:07:57.547 element at address: 0x200019a00000 with size: 0.485657 MiB
00:07:57.547 element at address: 0x200013800000 with size: 0.467651 MiB
00:07:57.547 element at address: 0x200028400000 with size: 0.393372 MiB
00:07:57.547 element at address: 0x200003a00000 with size: 0.372803 MiB
00:07:57.547 list of standard malloc elements. size: 199.934448 MiB
00:07:57.547 element at address: 0x20000b3fef80 with size: 132.000183 MiB
00:07:57.547 element at address: 0x2000071fef80 with size: 64.000183 MiB
00:07:57.547 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:07:57.547 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:07:57.547 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:07:57.547 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:07:57.547 element at address: 0x200000207480 with size: 0.062683 MiB
00:07:57.547 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:07:57.547 element at address: 0x20000b1ff040 with size: 0.000427 MiB
00:07:57.548 element at address: 0x200000207300 with size: 0.000366 MiB
00:07:57.548 element at address: 0x2000137ff040 with size: 0.000305 MiB
[... several hundred small malloc elements elided: 0.004456 MiB and 0.004089 MiB entries in the 0x200000321840-0x2000003d8c40 range, and 0.000244 MiB entries in the 0x200000200000-0x200000394380 range ...]
00:07:57.550 element at address: 0x2000003945c0 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003946c0 with size: 0.000244 MiB 00:07:57.550 element at address: 0x200000394900 with size: 0.000244 MiB 00:07:57.550 element at address: 0x200000398100 with size: 0.000244 MiB 00:07:57.550 element at address: 0x200000398340 with size: 0.000244 MiB 00:07:57.550 element at address: 0x200000398440 with size: 0.000244 MiB 00:07:57.550 element at address: 0x200000398680 with size: 0.000244 MiB 00:07:57.550 element at address: 0x20000039be80 with size: 0.000244 MiB 00:07:57.550 element at address: 0x20000039c0c0 with size: 0.000244 MiB 00:07:57.550 element at address: 0x20000039c1c0 with size: 0.000244 MiB 00:07:57.550 element at address: 0x20000039c400 with size: 0.000244 MiB 00:07:57.550 element at address: 0x20000039fc00 with size: 0.000244 MiB 00:07:57.550 element at address: 0x20000039fe40 with size: 0.000244 MiB 00:07:57.550 element at address: 0x20000039ff40 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003a0180 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003a3980 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003a3bc0 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003a3cc0 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003a3f00 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003a7700 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003a7940 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003a7a40 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003a7c80 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003ab480 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003ab6c0 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003ab7c0 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003aba00 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003af200 with 
size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003af440 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003af540 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003af780 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003b2f80 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003b31c0 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003b32c0 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003b3500 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003b6d00 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003b6f40 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003b7040 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003b7280 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003baa80 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003bacc0 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003badc0 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003bb000 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003be800 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003bea40 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003beb40 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003bed80 with size: 0.000244 MiB 00:07:57.550 element at address: 0x2000003c2580 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003c27c0 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003c28c0 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003c2b00 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003c6300 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003c6540 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003c6640 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003c6880 with size: 0.000244 MiB 00:07:57.551 element at address: 
0x2000003ca080 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003ca2c0 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003ca3c0 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003ca600 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003cde00 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003ce040 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003ce140 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003ce380 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003d1b80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003d1dc0 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003d1ec0 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003d2100 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003d5a00 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003d61c0 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003d62c0 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000003d6680 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:07:57.551 
element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:07:57.551 element at address: 0x200013877b80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x200013877c80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x200013877d80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x200013877e80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x200013877f80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x200013878080 with size: 0.000244 MiB 00:07:57.551 element at address: 0x200013878180 with size: 0.000244 MiB 00:07:57.551 element at address: 0x200013878280 with size: 0.000244 MiB 00:07:57.551 element at address: 0x200013878380 with size: 0.000244 MiB 00:07:57.551 element at address: 0x200013878480 with size: 0.000244 MiB 00:07:57.551 element at address: 0x200013878580 with size: 0.000244 MiB 00:07:57.551 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:07:57.551 element at address: 0x200028464b40 with size: 0.000244 
MiB 00:07:57.551 element at address: 0x200028464c40 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846b900 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846be80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846c080 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846c180 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846c280 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846c380 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846c480 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846c580 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846c680 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846c780 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846c880 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846c980 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846d080 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846d180 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846d280 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846d380 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846d480 
with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846d580 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846d680 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846d780 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846d880 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846d980 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846da80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846db80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846de80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846df80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846e080 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846e180 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846e280 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846e380 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846e480 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846e580 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846e680 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846e780 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846e880 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846e980 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:07:57.551 element at 
address: 0x20002846f080 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846f180 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846f280 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846f380 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846f480 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846f580 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846f680 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846f780 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846f880 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846f980 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:07:57.551 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:07:57.551 list of memzone associated elements. 
size: 602.333740 MiB
element at address: 0x20001b0954c0 with size: 211.416809 MiB, associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
element at address: 0x20002846ff80 with size: 157.562622 MiB, associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
element at address: 0x2000139fab40 with size: 84.020691 MiB, associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1528274_0
element at address: 0x2000009ff340 with size: 48.003113 MiB, associated memzone info: size: 48.002930 MiB name: MP_evtpool_1528274_0
element at address: 0x200003fff340 with size: 48.003113 MiB, associated memzone info: size: 48.002930 MiB name: MP_msgpool_1528274_0
element at address: 0x200019bbe900 with size: 20.255615 MiB, associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
element at address: 0x2000323feb00 with size: 18.005127 MiB, associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
element at address: 0x2000005ffdc0 with size: 2.000549 MiB, associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1528274
element at address: 0x200003bffdc0 with size: 2.000549 MiB, associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1528274
element at address: 0x20000021ec00 with size: 1.008179 MiB, associated memzone info: size: 1.007996 MiB name: MP_evtpool_1528274
element at address: 0x2000192fde00 with size: 1.008179 MiB, associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
element at address: 0x200019abc780 with size: 1.008179 MiB, associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
element at address: 0x200018efde00 with size: 1.008179 MiB, associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
element at address: 0x2000138f89c0 with size: 1.008179 MiB, associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
element at address: 0x200003eff100 with size: 1.000549 MiB, associated memzone info: size: 1.000366 MiB name: RG_ring_0_1528274
element at address: 0x200003affb80 with size: 1.000549 MiB, associated memzone info: size: 1.000366 MiB name: RG_ring_1_1528274
element at address: 0x2000196ffd40 with size: 1.000549 MiB, associated memzone info: size: 1.000366 MiB name: RG_ring_4_1528274
element at address: 0x2000322fe8c0 with size: 1.000549 MiB, associated memzone info: size: 1.000366 MiB name: RG_ring_5_1528274
element at address: 0x200003a5f700 with size: 0.500549 MiB, associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1528274
element at address: 0x20001927dbc0 with size: 0.500549 MiB, associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
element at address: 0x200013878680 with size: 0.500549 MiB, associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
element at address: 0x200019a7c540 with size: 0.250549 MiB, associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
element at address: 0x200003adf940 with size: 0.125549 MiB, associated memzone info: size: 0.125366 MiB name: RG_ring_2_1528274
element at address: 0x200018ef5bc0 with size: 0.031799 MiB, associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
element at address: 0x200028464d40 with size: 0.023804 MiB, associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
element at address: 0x200000218540 with size: 0.016174 MiB, associated memzone info: size: 0.015991 MiB name: RG_ring_3_1528274
element at address: 0x20002846aec0 with size: 0.002502 MiB, associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
element at address: 0x2000003d5c40 with size: 0.001343 MiB, associated memzone info: size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1
[48 per-VF QAT memzones follow, 0000:1a:01.0_qat through 0000:1a:02.7_qat, 0000:1c:01.0_qat through 0000:1c:02.7_qat, and 0000:1e:01.0_qat through 0000:1e:02.7_qat: each element 0.000488 MiB, each memzone 0.000305 MiB]
element at address: 0x2000003d6500 with size: 0.000366 MiB, associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1
element at address: 0x20000021d980 with size: 0.000366 MiB, associated memzone info: size: 0.000183 MiB name: MP_msgpool_1528274
element at address: 0x2000137ffd80 with size: 0.000366 MiB, associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1528274
element at address: 0x20002846ba00 with size: 0.000366 MiB, associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
[per-device data memzones follow, rte_cryptodev_data_0 through rte_cryptodev_data_31 interleaved with rte_compressdev_data_0 through rte_compressdev_data_15: each element 0.000305 MiB, each memzone 0.000122 MiB]
element at address: 0x200000398780 with size: 0.000305 MiB 00:07:57.553 associated memzone info: size:
0.000122 MiB name: rte_cryptodev_data_32 00:07:57.553 element at address: 0x200000398540 with size: 0.000305 MiB 00:07:57.553 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33 00:07:57.553 element at address: 0x200000398200 with size: 0.000305 MiB 00:07:57.553 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16 00:07:57.553 element at address: 0x200000394a00 with size: 0.000305 MiB 00:07:57.553 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34 00:07:57.553 element at address: 0x2000003947c0 with size: 0.000305 MiB 00:07:57.553 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35 00:07:57.553 element at address: 0x200000394480 with size: 0.000305 MiB 00:07:57.553 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17 00:07:57.553 element at address: 0x200000390c80 with size: 0.000305 MiB 00:07:57.553 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36 00:07:57.553 element at address: 0x200000390a40 with size: 0.000305 MiB 00:07:57.553 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37 00:07:57.553 element at address: 0x200000390700 with size: 0.000305 MiB 00:07:57.553 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_18 00:07:57.553 element at address: 0x20000038cf00 with size: 0.000305 MiB 00:07:57.553 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38 00:07:57.553 element at address: 0x20000038ccc0 with size: 0.000305 MiB 00:07:57.553 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39 00:07:57.553 element at address: 0x20000038c980 with size: 0.000305 MiB 00:07:57.553 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19 00:07:57.554 element at address: 0x200000389180 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40 00:07:57.554 element at address: 0x200000388f40 with size: 
0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41 00:07:57.554 element at address: 0x200000388c00 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20 00:07:57.554 element at address: 0x200000385400 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42 00:07:57.554 element at address: 0x2000003851c0 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43 00:07:57.554 element at address: 0x200000384e80 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21 00:07:57.554 element at address: 0x200000381680 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44 00:07:57.554 element at address: 0x200000381440 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45 00:07:57.554 element at address: 0x200000381100 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22 00:07:57.554 element at address: 0x20000037d900 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46 00:07:57.554 element at address: 0x20000037d6c0 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47 00:07:57.554 element at address: 0x20000037d380 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23 00:07:57.554 element at address: 0x200000379b80 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48 00:07:57.554 element at address: 0x200000379940 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49 
00:07:57.554 element at address: 0x200000379600 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24 00:07:57.554 element at address: 0x200000375e00 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50 00:07:57.554 element at address: 0x200000375bc0 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51 00:07:57.554 element at address: 0x200000375880 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25 00:07:57.554 element at address: 0x200000372080 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52 00:07:57.554 element at address: 0x200000371e40 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_53 00:07:57.554 element at address: 0x200000371b00 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_26 00:07:57.554 element at address: 0x20000036e300 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54 00:07:57.554 element at address: 0x20000036e0c0 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55 00:07:57.554 element at address: 0x20000036dd80 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27 00:07:57.554 element at address: 0x20000036a580 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56 00:07:57.554 element at address: 0x20000036a340 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57 00:07:57.554 element at address: 0x20000036a000 with size: 0.000305 MiB 00:07:57.554 associated memzone 
info: size: 0.000122 MiB name: rte_compressdev_data_28 00:07:57.554 element at address: 0x200000366800 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58 00:07:57.554 element at address: 0x2000003665c0 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59 00:07:57.554 element at address: 0x200000366280 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29 00:07:57.554 element at address: 0x200000362a80 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60 00:07:57.554 element at address: 0x200000362840 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61 00:07:57.554 element at address: 0x200000362500 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30 00:07:57.554 element at address: 0x20000035ed00 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62 00:07:57.554 element at address: 0x20000035eac0 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63 00:07:57.554 element at address: 0x20000035e780 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31 00:07:57.554 element at address: 0x20000035af80 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_64 00:07:57.554 element at address: 0x20000035ad40 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_65 00:07:57.554 element at address: 0x20000035aa00 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_32 00:07:57.554 element at address: 0x200000357200 with 
size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_66 00:07:57.554 element at address: 0x200000356fc0 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_67 00:07:57.554 element at address: 0x200000356c80 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_33 00:07:57.554 element at address: 0x200000353480 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_68 00:07:57.554 element at address: 0x200000353240 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_69 00:07:57.554 element at address: 0x200000352f00 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_34 00:07:57.554 element at address: 0x20000034f700 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_70 00:07:57.554 element at address: 0x20000034f4c0 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_71 00:07:57.554 element at address: 0x20000034f180 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_35 00:07:57.554 element at address: 0x20000034b980 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_72 00:07:57.554 element at address: 0x20000034b740 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_73 00:07:57.554 element at address: 0x20000034b400 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_36 00:07:57.554 element at address: 0x200000347c00 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_74 
00:07:57.554 element at address: 0x2000003479c0 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_75 00:07:57.554 element at address: 0x200000347680 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_37 00:07:57.554 element at address: 0x200000343e80 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_76 00:07:57.554 element at address: 0x200000343c40 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_77 00:07:57.554 element at address: 0x200000343900 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_38 00:07:57.554 element at address: 0x200000340100 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_78 00:07:57.554 element at address: 0x20000033fec0 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_79 00:07:57.554 element at address: 0x20000033fb80 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_39 00:07:57.554 element at address: 0x20000033c380 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_80 00:07:57.554 element at address: 0x20000033c140 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_81 00:07:57.554 element at address: 0x20000033be00 with size: 0.000305 MiB 00:07:57.554 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_40 00:07:57.554 element at address: 0x200000338600 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_82 00:07:57.555 element at address: 0x2000003383c0 with size: 0.000305 MiB 00:07:57.555 associated memzone 
info: size: 0.000122 MiB name: rte_cryptodev_data_83 00:07:57.555 element at address: 0x200000338080 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_41 00:07:57.555 element at address: 0x200000334880 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_84 00:07:57.555 element at address: 0x200000334640 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_85 00:07:57.555 element at address: 0x200000334300 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_42 00:07:57.555 element at address: 0x200000330b00 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_86 00:07:57.555 element at address: 0x2000003308c0 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_87 00:07:57.555 element at address: 0x200000330580 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_43 00:07:57.555 element at address: 0x20000032cd80 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_88 00:07:57.555 element at address: 0x20000032cb40 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_89 00:07:57.555 element at address: 0x20000032c800 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_44 00:07:57.555 element at address: 0x200000329000 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_90 00:07:57.555 element at address: 0x200000328dc0 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_91 00:07:57.555 element at address: 0x200000328a80 with 
size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_45 00:07:57.555 element at address: 0x200000325280 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_92 00:07:57.555 element at address: 0x200000325040 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_93 00:07:57.555 element at address: 0x200000324d00 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_46 00:07:57.555 element at address: 0x200000321500 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_94 00:07:57.555 element at address: 0x2000003212c0 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_95 00:07:57.555 element at address: 0x200000320f80 with size: 0.000305 MiB 00:07:57.555 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_47 00:07:57.555 element at address: 0x2000003d5900 with size: 0.000244 MiB 00:07:57.555 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:07:57.555 16:24:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:57.555 16:24:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1528274 00:07:57.555 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1528274 ']' 00:07:57.555 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1528274 00:07:57.555 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:57.555 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.555 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1528274 00:07:57.814 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:07:57.814 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.814 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1528274' 00:07:57.814 killing process with pid 1528274 00:07:57.814 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1528274 00:07:57.814 16:24:54 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1528274 00:08:01.100 00:08:01.100 real 0m5.461s 00:08:01.100 user 0m5.401s 00:08:01.100 sys 0m0.714s 00:08:01.100 16:24:57 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.100 16:24:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:01.100 ************************************ 00:08:01.100 END TEST dpdk_mem_utility 00:08:01.100 ************************************ 00:08:01.100 16:24:57 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:08:01.100 16:24:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.100 16:24:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.100 16:24:57 -- common/autotest_common.sh@10 -- # set +x 00:08:01.100 ************************************ 00:08:01.100 START TEST event 00:08:01.100 ************************************ 00:08:01.100 16:24:57 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:08:01.100 * Looking for test storage... 
00:08:01.100 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event 00:08:01.100 16:24:57 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:01.100 16:24:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:01.100 16:24:57 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:01.100 16:24:57 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:01.100 16:24:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.100 16:24:57 event -- common/autotest_common.sh@10 -- # set +x 00:08:01.100 ************************************ 00:08:01.100 START TEST event_perf 00:08:01.100 ************************************ 00:08:01.100 16:24:57 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:01.100 Running I/O for 1 seconds...[2024-07-24 16:24:57.907015] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:08:01.100 [2024-07-24 16:24:57.907121] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529375 ] 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:01.359 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:01.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:01.359 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:01.359 [2024-07-24 16:24:58.123426] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.618 [2024-07-24 16:24:58.415600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.618 [2024-07-24 16:24:58.415673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.618 [2024-07-24 16:24:58.415739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.618 [2024-07-24 16:24:58.415747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.519 Running I/O for 1 seconds... 00:08:03.519 lcore 0: 178633 00:08:03.519 lcore 1: 178631 00:08:03.519 lcore 2: 178631 00:08:03.519 lcore 3: 178634 00:08:03.519 done. 
00:08:03.519 00:08:03.519 real 0m2.122s 00:08:03.519 user 0m4.864s 00:08:03.519 sys 0m0.250s 00:08:03.519 16:24:59 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.519 16:24:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:03.519 ************************************ 00:08:03.519 END TEST event_perf 00:08:03.519 ************************************ 00:08:03.519 16:25:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:03.519 16:25:00 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:03.519 16:25:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.519 16:25:00 event -- common/autotest_common.sh@10 -- # set +x 00:08:03.519 ************************************ 00:08:03.519 START TEST event_reactor 00:08:03.519 ************************************ 00:08:03.519 16:25:00 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:03.519 [2024-07-24 16:25:00.116365] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:08:03.519 [2024-07-24 16:25:00.116476] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529667 ]
00:08:03.519 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:03.519 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:03.520 [2024-07-24 16:25:00.340042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:03.778 [2024-07-24 16:25:00.626644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:05.683 test_start
00:08:05.683 oneshot
00:08:05.683 tick 100
00:08:05.683 tick 100
00:08:05.683 tick 250
00:08:05.683 tick 100
00:08:05.683 tick 100
00:08:05.683 tick 100
00:08:05.683 tick 250
00:08:05.683 tick 500
00:08:05.683 tick 100
00:08:05.683 tick 100
00:08:05.683 tick 250
00:08:05.683 tick 100
00:08:05.683 tick 100
00:08:05.683 test_end
00:08:05.683 
00:08:05.683 real 0m2.107s
00:08:05.683 user 0m1.861s
00:08:05.683 sys 0m0.235s
00:08:05.683 16:25:02 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:05.683 16:25:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:08:05.683 ************************************
00:08:05.683 END TEST event_reactor
00:08:05.683 ************************************
00:08:05.684 16:25:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:08:05.684 16:25:02 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:05.684 16:25:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:05.684 16:25:02 event -- common/autotest_common.sh@10 -- # set +x
00:08:05.684 ************************************
00:08:05.684 START TEST event_reactor_perf
00:08:05.684 ************************************
00:08:05.684 16:25:02 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:08:05.684 [2024-07-24 16:25:02.303583] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:08:05.684 [2024-07-24 16:25:02.303696] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530116 ]
00:08:05.684 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:05.684 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:05.684 [2024-07-24 16:25:02.525630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:05.943 [2024-07-24 16:25:02.793154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:07.846 test_start
00:08:07.846 test_end
00:08:07.846 Performance: 274105 events per second
00:08:07.846 
00:08:07.846 real 0m2.106s
00:08:07.846 user 0m1.848s
00:08:07.846 sys 0m0.247s
00:08:07.846 16:25:04 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:07.846 16:25:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:08:07.846 ************************************
00:08:07.846 END TEST event_reactor_perf
00:08:07.846 ************************************
00:08:07.846 16:25:04 event -- event/event.sh@49 -- # uname -s
00:08:07.846 16:25:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:08:07.846 16:25:04 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:08:07.846 16:25:04 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:07.846 16:25:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:07.846 16:25:04 event -- common/autotest_common.sh@10 -- # set +x
00:08:07.846 ************************************
00:08:07.846 START TEST event_scheduler
00:08:07.846 ************************************
00:08:07.846 16:25:04 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:08:07.846 * Looking for test storage...
00:08:07.846 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler
00:08:07.846 16:25:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:08:07.846 16:25:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1530527
00:08:07.846 16:25:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:08:07.846 16:25:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:08:07.846 16:25:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1530527
00:08:07.846 16:25:04 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1530527 ']'
00:08:07.846 16:25:04 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:07.846 16:25:04 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:07.846 16:25:04 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:07.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:07.846 16:25:04 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:07.846 16:25:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:08.106 [2024-07-24 16:25:04.660418] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:08:07.846 [2024-07-24 16:25:04.660543] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530527 ]
00:08:08.106 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:08.106 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:08.106 [2024-07-24 16:25:04.847646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:08.381 [2024-07-24 16:25:05.057628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:08.381 [2024-07-24 16:25:05.057670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:08.381 [2024-07-24 16:25:05.057727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:08.381 [2024-07-24 16:25:05.057735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:08:08.968 16:25:05 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:08.968 16:25:05 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:08:08.968 16:25:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:08:08.968 16:25:05 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:08.968 16:25:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:08.968 [2024-07-24 16:25:05.535990] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:08:08.968 [2024-07-24 16:25:05.536021] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:08:08.968 [2024-07-24 16:25:05.536038] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:08:08.968 [2024-07-24 16:25:05.536051] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:08:08.968 [2024-07-24 16:25:05.536060] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:08:08.968 16:25:05 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:08.968 16:25:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:08:08.968 16:25:05 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:08.968 16:25:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:09.227 [2024-07-24 16:25:05.888877] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:08:09.227 16:25:05 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.227 16:25:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:08:09.227 16:25:05 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:09.227 16:25:05 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:09.227 16:25:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:09.227 ************************************
00:08:09.227 START TEST scheduler_create_thread
00:08:09.227 ************************************
00:08:09.227 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:08:09.227 16:25:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:08:09.227 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.227 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:09.227 2
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:09.228 3
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:09.228 4
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:09.228 5
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:09.228 6
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:09.228 7
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:09.228 8
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.228 16:25:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:09.228 9
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:09.228 10
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:09.228 16:25:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:11.131 16:25:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.131 16:25:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:08:11.131 16:25:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:08:11.131 16:25:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:11.131 16:25:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:11.697 16:25:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:11.697 
00:08:11.697 real 0m2.625s
00:08:11.697 user 0m0.022s
00:08:11.697 sys 0m0.009s
00:08:11.955 16:25:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:11.955 16:25:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:11.955 ************************************
00:08:11.955 END TEST scheduler_create_thread
00:08:11.955 ************************************
00:08:11.955 16:25:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:08:11.955 16:25:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1530527
00:08:11.955 16:25:08 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1530527 ']'
00:08:11.955 16:25:08 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1530527
00:08:11.955 16:25:08 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:08:11.955 16:25:08 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:11.955 16:25:08 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1530527
00:08:11.955 16:25:08 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:08:11.955 16:25:08 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:08:11.955 16:25:08 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1530527'
00:08:11.955 killing process with pid 1530527
00:08:11.955 16:25:08 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1530527
00:08:12.214 16:25:08 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1530527
00:08:12.214 [2024-07-24 16:25:09.037843] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:08:13.592 
00:08:13.592 real 0m5.879s
00:08:13.592 user 0m9.769s
00:08:13.592 sys 0m0.637s
00:08:13.592 16:25:10 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:13.592 16:25:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:13.592 ************************************
00:08:13.592 END TEST event_scheduler
00:08:13.592 ************************************
00:08:13.592 16:25:10 event -- event/event.sh@51 -- # modprobe -n nbd
00:08:13.592 16:25:10 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:08:13.592 16:25:10 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:13.592 16:25:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:13.592 16:25:10 event -- common/autotest_common.sh@10 -- # set +x
00:08:13.592 ************************************
00:08:13.592 START TEST app_repeat
00:08:13.592 ************************************
00:08:13.592 16:25:10 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:08:13.592 16:25:10 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:13.592 16:25:10 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:13.592 16:25:10 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:08:13.592 16:25:10 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:13.592 16:25:10 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:08:13.592 16:25:10 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:08:13.592 16:25:10 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:08:13.592 16:25:10 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1531634
00:08:13.592 16:25:10 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:08:13.592 16:25:10 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:08:13.592 16:25:10 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1531634'
00:08:13.592 Process app_repeat pid: 1531634
00:08:13.592 16:25:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:13.592 16:25:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:08:13.592 spdk_app_start Round 0
00:08:13.592 16:25:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1531634 /var/tmp/spdk-nbd.sock
00:08:13.592 16:25:10 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1531634 ']'
00:08:13.592 16:25:10 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:13.592 16:25:10 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:13.592 16:25:10 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:13.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:13.592 16:25:10 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:13.592 16:25:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:13.851 [2024-07-24 16:25:10.490711] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:08:13.851 [2024-07-24 16:25:10.490820] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1531634 ]
00:08:13.851 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:13.851 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:14.111 [2024-07-24 16:25:10.719519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:14.370 [2024-07-24 16:25:11.010276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:14.370 [2024-07-24 16:25:11.010283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:14.937 16:25:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:14.937 16:25:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:08:14.937 16:25:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:15.195 Malloc0
00:08:15.195 16:25:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:15.454 Malloc1
00:08:15.454 16:25:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:15.454 16:25:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
16:25:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:15.454 16:25:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:15.454 16:25:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:15.454 16:25:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:15.454 16:25:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:15.454 16:25:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.454 16:25:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:15.454 16:25:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:15.454 16:25:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:15.454 16:25:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:15.454 16:25:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:15.454 16:25:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:15.454 16:25:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:15.454 16:25:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:15.713 /dev/nbd0 00:08:15.713 16:25:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:15.713 16:25:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:15.713 16:25:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:15.713 16:25:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:15.713 16:25:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:15.713 16:25:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:15.713 
16:25:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:15.713 16:25:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:15.713 16:25:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:15.713 16:25:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:15.713 16:25:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:15.713 1+0 records in 00:08:15.713 1+0 records out 00:08:15.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002583 s, 15.9 MB/s 00:08:15.713 16:25:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:15.713 16:25:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:15.713 16:25:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:15.713 16:25:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:15.713 16:25:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:15.713 16:25:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:15.713 16:25:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:15.713 16:25:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:15.972 /dev/nbd1 00:08:15.972 16:25:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:15.972 16:25:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:15.972 16:25:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:15.972 16:25:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:15.972 
16:25:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:15.972 16:25:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:15.972 16:25:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:15.972 16:25:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:15.972 16:25:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:15.972 16:25:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:15.972 16:25:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:15.972 1+0 records in 00:08:15.972 1+0 records out 00:08:15.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262542 s, 15.6 MB/s 00:08:15.972 16:25:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:15.972 16:25:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:15.972 16:25:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:15.972 16:25:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:15.972 16:25:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:15.972 16:25:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:15.972 16:25:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:15.972 16:25:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:15.972 16:25:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.972 16:25:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:16.231 
16:25:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:16.231 { 00:08:16.231 "nbd_device": "/dev/nbd0", 00:08:16.231 "bdev_name": "Malloc0" 00:08:16.231 }, 00:08:16.231 { 00:08:16.231 "nbd_device": "/dev/nbd1", 00:08:16.231 "bdev_name": "Malloc1" 00:08:16.231 } 00:08:16.231 ]' 00:08:16.231 16:25:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:16.231 { 00:08:16.231 "nbd_device": "/dev/nbd0", 00:08:16.231 "bdev_name": "Malloc0" 00:08:16.231 }, 00:08:16.231 { 00:08:16.231 "nbd_device": "/dev/nbd1", 00:08:16.231 "bdev_name": "Malloc1" 00:08:16.231 } 00:08:16.231 ]' 00:08:16.231 16:25:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:16.231 16:25:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:16.231 /dev/nbd1' 00:08:16.231 16:25:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:16.231 /dev/nbd1' 00:08:16.231 16:25:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:16.231 16:25:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:16.231 16:25:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:16.231 16:25:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:16.231 16:25:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:16.232 16:25:12 
event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:16.232 256+0 records in 00:08:16.232 256+0 records out 00:08:16.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113766 s, 92.2 MB/s 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:16.232 256+0 records in 00:08:16.232 256+0 records out 00:08:16.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201695 s, 52.0 MB/s 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:16.232 256+0 records in 00:08:16.232 256+0 records out 00:08:16.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237863 s, 44.1 MB/s 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:16.232 16:25:12 
event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:16.232 16:25:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:16.232 16:25:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:16.232 16:25:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.232 16:25:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:16.232 16:25:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:16.232 16:25:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:16.232 16:25:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.232 16:25:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:16.491 16:25:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:16.491 16:25:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:16.491 16:25:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:16.491 16:25:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.491 16:25:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.491 16:25:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:16.491 16:25:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:16.491 16:25:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # 
return 0 00:08:16.491 16:25:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:16.491 16:25:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:16.749 16:25:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:16.749 16:25:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:16.749 16:25:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:16.749 16:25:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:16.749 16:25:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:16.749 16:25:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:16.749 16:25:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:16.749 16:25:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:16.749 16:25:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:16.749 16:25:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:16.749 16:25:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:17.007 16:25:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:17.007 16:25:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:17.007 16:25:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:17.007 16:25:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:17.007 16:25:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:17.007 16:25:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:17.007 16:25:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:17.007 16:25:13 event.app_repeat -- 
bdev/nbd_common.sh@65 -- # count=0 00:08:17.007 16:25:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:17.007 16:25:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:17.007 16:25:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:17.007 16:25:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:17.007 16:25:13 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:17.574 16:25:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:19.477 [2024-07-24 16:25:16.187614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:19.736 [2024-07-24 16:25:16.460766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.736 [2024-07-24 16:25:16.460768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.995 [2024-07-24 16:25:16.756326] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:19.995 [2024-07-24 16:25:16.756382] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:08:20.562 16:25:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:20.562 16:25:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:20.562 spdk_app_start Round 1 00:08:20.562 16:25:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1531634 /var/tmp/spdk-nbd.sock 00:08:20.562 16:25:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1531634 ']' 00:08:20.562 16:25:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:20.562 16:25:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.562 16:25:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:20.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:20.562 16:25:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.562 16:25:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:20.821 16:25:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.821 16:25:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:20.821 16:25:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:21.078 Malloc0 00:08:21.078 16:25:17 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:21.337 Malloc1 00:08:21.337 16:25:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # 
bdev_list=('Malloc0' 'Malloc1') 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:21.337 16:25:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:21.595 /dev/nbd0 00:08:21.595 16:25:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:21.595 16:25:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:21.595 16:25:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:21.595 16:25:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:21.595 16:25:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:21.595 16:25:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:21.595 16:25:18 event.app_repeat -- common/autotest_common.sh@872 -- # 
grep -q -w nbd0 /proc/partitions 00:08:21.595 16:25:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:21.595 16:25:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:21.595 16:25:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:21.595 16:25:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:21.595 1+0 records in 00:08:21.595 1+0 records out 00:08:21.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246154 s, 16.6 MB/s 00:08:21.595 16:25:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:21.595 16:25:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:21.595 16:25:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:21.595 16:25:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:21.595 16:25:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:21.595 16:25:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:21.595 16:25:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:21.595 16:25:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:21.854 /dev/nbd1 00:08:21.854 16:25:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:21.854 16:25:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:21.854 16:25:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:21.854 16:25:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:21.854 16:25:18 event.app_repeat -- common/autotest_common.sh@871 -- 
# (( i = 1 )) 00:08:21.854 16:25:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:21.854 16:25:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:21.854 16:25:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:21.854 16:25:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:21.854 16:25:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:21.854 16:25:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:21.854 1+0 records in 00:08:21.854 1+0 records out 00:08:21.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263866 s, 15.5 MB/s 00:08:21.854 16:25:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:21.854 16:25:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:21.854 16:25:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:21.854 16:25:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:21.854 16:25:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:21.854 16:25:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:21.854 16:25:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:21.854 16:25:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:21.854 16:25:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.854 16:25:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:22.114 { 00:08:22.114 "nbd_device": "/dev/nbd0", 00:08:22.114 "bdev_name": "Malloc0" 00:08:22.114 }, 00:08:22.114 { 00:08:22.114 "nbd_device": "/dev/nbd1", 00:08:22.114 "bdev_name": "Malloc1" 00:08:22.114 } 00:08:22.114 ]' 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:22.114 { 00:08:22.114 "nbd_device": "/dev/nbd0", 00:08:22.114 "bdev_name": "Malloc0" 00:08:22.114 }, 00:08:22.114 { 00:08:22.114 "nbd_device": "/dev/nbd1", 00:08:22.114 "bdev_name": "Malloc1" 00:08:22.114 } 00:08:22.114 ]' 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:22.114 /dev/nbd1' 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:22.114 /dev/nbd1' 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:22.114 16:25:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:22.373 256+0 records in 00:08:22.373 256+0 records out 00:08:22.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104317 s, 101 MB/s 00:08:22.373 16:25:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:22.373 16:25:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:22.373 256+0 records in 00:08:22.373 256+0 records out 00:08:22.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202639 s, 51.7 MB/s 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:22.373 256+0 records in 00:08:22.373 256+0 records out 00:08:22.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240684 s, 43.6 MB/s 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:22.373 16:25:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:22.632 16:25:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:22.632 16:25:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:22.632 16:25:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:22.632 16:25:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:22.632 16:25:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:22.632 16:25:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:22.632 16:25:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:22.632 16:25:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:22.632 16:25:19 event.app_repeat -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:22.632 16:25:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:22.891 16:25:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:22.891 16:25:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:22.891 16:25:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:22.891 16:25:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:22.891 16:25:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:22.891 16:25:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:22.891 16:25:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:22.891 16:25:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:22.891 16:25:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:22.891 16:25:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.891 16:25:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:23.150 16:25:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:23.150 16:25:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:23.150 16:25:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:23.150 16:25:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:23.150 16:25:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:23.150 16:25:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:23.150 16:25:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:23.150 16:25:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:23.150 16:25:19 
event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:23.150 16:25:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:23.150 16:25:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:23.150 16:25:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:23.150 16:25:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:23.717 16:25:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:25.659 [2024-07-24 16:25:22.217446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:25.659 [2024-07-24 16:25:22.500850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.659 [2024-07-24 16:25:22.500853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.228 [2024-07-24 16:25:22.819892] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:26.228 [2024-07-24 16:25:22.819962] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:26.486 16:25:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:26.486 16:25:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:26.486 spdk_app_start Round 2 00:08:26.486 16:25:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1531634 /var/tmp/spdk-nbd.sock 00:08:26.487 16:25:23 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1531634 ']' 00:08:26.487 16:25:23 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:26.487 16:25:23 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.487 16:25:23 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
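The `nbd_dd_data_verify` sequence traced above follows a simple pattern: fill a 1 MiB file from /dev/urandom, dd it onto each exported /dev/nbd* device with oflag=direct, then cmp the first 1M of each device against the pattern file and rm it. A minimal standalone sketch of that pattern, assuming ordinary temp files stand in for /dev/nbd0 and /dev/nbd1 (a real run needs a live SPDK nbd target, and oflag=direct is dropped because it is unreliable on regular files):

```shell
# Sketch of the dd-write / cmp-verify pattern from the trace above.
# Assumption: temp files stand in for the /dev/nbd* devices.
set -e
tmp_file=$(mktemp)
nbd0=$(mktemp)   # stand-in for /dev/nbd0
nbd1=$(mktemp)   # stand-in for /dev/nbd1

# write phase: 256 x 4 KiB = 1 MiB of random data, copied to each "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$nbd0" "$nbd1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# verify phase: byte-wise compare of the first 1 MiB; cmp exits non-zero
# on the first mismatch, and set -e then aborts the script
for dev in "$nbd0" "$nbd1"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm -f "$tmp_file" "$nbd0" "$nbd1"
```

Because the same pattern file is written to every device, a single cmp per device is enough to prove the data round-tripped intact.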
00:08:26.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:26.487 16:25:23 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.487 16:25:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:26.745 16:25:23 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.745 16:25:23 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:26.745 16:25:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:27.312 Malloc0 00:08:27.312 16:25:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:27.571 Malloc1 00:08:27.571 16:25:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@11 -- 
# nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:27.571 /dev/nbd0 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:27.571 16:25:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:27.571 16:25:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:27.571 16:25:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:27.571 16:25:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:27.571 16:25:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:27.571 16:25:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:27.829 16:25:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:27.829 16:25:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:27.829 16:25:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:27.829 16:25:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:27.829 1+0 records in 00:08:27.829 1+0 records out 00:08:27.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358198 s, 11.4 MB/s 00:08:27.829 16:25:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:27.829 16:25:24 event.app_repeat 
-- common/autotest_common.sh@886 -- # size=4096 00:08:27.829 16:25:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:27.829 16:25:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:27.829 16:25:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:27.829 16:25:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.829 16:25:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:27.829 16:25:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:27.829 /dev/nbd1 00:08:28.088 16:25:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:28.088 16:25:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:28.088 16:25:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:28.088 16:25:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:28.088 16:25:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:28.088 16:25:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:28.088 16:25:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:28.088 16:25:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:28.088 16:25:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:28.088 16:25:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:28.088 16:25:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:28.088 1+0 records in 00:08:28.088 1+0 records out 00:08:28.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260794 s, 15.7 MB/s 00:08:28.088 
16:25:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:28.088 16:25:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:28.088 16:25:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:28.088 16:25:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:28.088 16:25:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:28.088 16:25:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.088 16:25:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:28.088 16:25:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:28.088 16:25:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.088 16:25:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:28.088 16:25:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:28.088 { 00:08:28.088 "nbd_device": "/dev/nbd0", 00:08:28.088 "bdev_name": "Malloc0" 00:08:28.088 }, 00:08:28.088 { 00:08:28.088 "nbd_device": "/dev/nbd1", 00:08:28.088 "bdev_name": "Malloc1" 00:08:28.088 } 00:08:28.088 ]' 00:08:28.088 16:25:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:28.089 16:25:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:28.089 { 00:08:28.089 "nbd_device": "/dev/nbd0", 00:08:28.089 "bdev_name": "Malloc0" 00:08:28.089 }, 00:08:28.089 { 00:08:28.089 "nbd_device": "/dev/nbd1", 00:08:28.089 "bdev_name": "Malloc1" 00:08:28.089 } 00:08:28.089 ]' 00:08:28.089 16:25:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:28.089 /dev/nbd1' 00:08:28.089 16:25:24 event.app_repeat -- 
bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:28.089 /dev/nbd1' 00:08:28.089 16:25:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:28.089 16:25:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:28.089 16:25:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:28.089 16:25:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:28.089 16:25:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:28.089 16:25:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:28.089 16:25:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:28.089 16:25:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:28.089 16:25:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:28.089 16:25:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:28.089 16:25:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:28.089 16:25:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:28.347 256+0 records in 00:08:28.347 256+0 records out 00:08:28.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113277 s, 92.6 MB/s 00:08:28.347 16:25:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.347 16:25:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:28.347 256+0 records in 00:08:28.347 256+0 records out 00:08:28.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205091 s, 51.1 MB/s 00:08:28.347 16:25:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.347 16:25:24 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:28.347 256+0 records in 00:08:28.347 256+0 records out 00:08:28.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242816 s, 43.2 MB/s 00:08:28.347 16:25:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:28.347 16:25:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:28.347 16:25:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:28.347 16:25:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:28.347 16:25:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:28.347 16:25:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:28.347 16:25:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:28.347 16:25:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:28.348 16:25:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:28.348 16:25:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:28.348 16:25:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:28.348 16:25:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:28.348 16:25:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:28.348 16:25:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.348 16:25:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:28.348 16:25:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:28.348 16:25:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:28.348 16:25:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:28.348 16:25:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:28.605 16:25:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:28.605 16:25:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:28.606 16:25:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:28.606 16:25:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:28.606 16:25:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:28.606 16:25:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:28.606 16:25:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:28.606 16:25:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:28.606 16:25:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:28.606 16:25:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:28.864 16:25:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:28.864 16:25:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:28.864 16:25:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:28.864 16:25:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:28.864 16:25:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:28.864 16:25:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:28.864 16:25:25 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:28.864 16:25:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:28.864 16:25:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:28.864 16:25:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.865 16:25:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:29.123 16:25:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:29.123 16:25:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:29.123 16:25:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:29.123 16:25:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:29.123 16:25:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:29.123 16:25:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:29.123 16:25:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:29.123 16:25:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:29.123 16:25:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:29.123 16:25:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:29.123 16:25:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:29.123 16:25:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:29.123 16:25:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:29.691 16:25:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:31.593 [2024-07-24 16:25:28.282697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:31.852 [2024-07-24 16:25:28.555488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.852 [2024-07-24 
16:25:28.555490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.110 [2024-07-24 16:25:28.873600] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:32.110 [2024-07-24 16:25:28.873654] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:32.676 16:25:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1531634 /var/tmp/spdk-nbd.sock 00:08:32.676 16:25:29 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1531634 ']' 00:08:32.676 16:25:29 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:32.676 16:25:29 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.676 16:25:29 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:32.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:32.676 16:25:29 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.676 16:25:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:32.934 16:25:29 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:32.934 16:25:29 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:32.934 16:25:29 event.app_repeat -- event/event.sh@39 -- # killprocess 1531634 00:08:32.934 16:25:29 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1531634 ']' 00:08:32.934 16:25:29 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1531634 00:08:32.934 16:25:29 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:32.934 16:25:29 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.934 16:25:29 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1531634 00:08:32.934 16:25:29 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.934 16:25:29 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:32.934 16:25:29 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1531634' 00:08:32.934 killing process with pid 1531634 00:08:32.934 16:25:29 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1531634 00:08:32.934 16:25:29 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1531634 00:08:34.836 spdk_app_start is called in Round 0. 00:08:34.836 Shutdown signal received, stop current app iteration 00:08:34.836 Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 reinitialization... 00:08:34.836 spdk_app_start is called in Round 1. 00:08:34.836 Shutdown signal received, stop current app iteration 00:08:34.837 Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 reinitialization... 00:08:34.837 spdk_app_start is called in Round 2. 
00:08:34.837 Shutdown signal received, stop current app iteration 00:08:34.837 Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 reinitialization... 00:08:34.837 spdk_app_start is called in Round 3. 00:08:34.837 Shutdown signal received, stop current app iteration 00:08:34.837 16:25:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:34.837 16:25:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:34.837 00:08:34.837 real 0m20.794s 00:08:34.837 user 0m41.390s 00:08:34.837 sys 0m3.694s 00:08:34.837 16:25:31 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.837 16:25:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:34.837 ************************************ 00:08:34.837 END TEST app_repeat 00:08:34.837 ************************************ 00:08:34.837 16:25:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:34.837 00:08:34.837 real 0m33.522s 00:08:34.837 user 0m59.932s 00:08:34.837 sys 0m5.414s 00:08:34.837 16:25:31 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.837 16:25:31 event -- common/autotest_common.sh@10 -- # set +x 00:08:34.837 ************************************ 00:08:34.837 END TEST event 00:08:34.837 ************************************ 00:08:34.837 16:25:31 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:08:34.837 16:25:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.837 16:25:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.837 16:25:31 -- common/autotest_common.sh@10 -- # set +x 00:08:34.837 ************************************ 00:08:34.837 START TEST thread 00:08:34.837 ************************************ 00:08:34.837 16:25:31 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:08:34.837 * Looking for test storage... 
00:08:34.837 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:08:34.837 16:25:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:34.837 16:25:31 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:34.837 16:25:31 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.837 16:25:31 thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.837 ************************************ 00:08:34.837 START TEST thread_poller_perf 00:08:34.837 ************************************ 00:08:34.837 16:25:31 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:34.837 [2024-07-24 16:25:31.534241] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:08:34.837 [2024-07-24 16:25:31.534349] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535383 ] 00:08:34.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:34.837 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:34.837 [the qat_pci_device_allocate() / EAL pair above repeats identically for each remaining QAT function, 0000:3d:01.1 through 0000:3f:02.7] 00:08:35.096 [2024-07-24 16:25:31.760869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.355 [2024-07-24 16:25:32.047078] reactor.c: 941:reactor_run:
*NOTICE*: Reactor started on core 0 00:08:35.355 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:37.258 ====================================== 00:08:37.258 busy:2517834998 (cyc) 00:08:37.258 total_run_count: 281000 00:08:37.258 tsc_hz: 2500000000 (cyc) 00:08:37.258 ====================================== 00:08:37.258 poller_cost: 8960 (cyc), 3584 (nsec) 00:08:37.258 00:08:37.258 real 0m2.133s 00:08:37.258 user 0m1.881s 00:08:37.258 sys 0m0.240s 00:08:37.258 16:25:33 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.258 16:25:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:37.258 ************************************ 00:08:37.258 END TEST thread_poller_perf 00:08:37.258 ************************************ 00:08:37.258 16:25:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:37.258 16:25:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:37.258 16:25:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.258 16:25:33 thread -- common/autotest_common.sh@10 -- # set +x 00:08:37.258 ************************************ 00:08:37.258 START TEST thread_poller_perf 00:08:37.258 ************************************ 00:08:37.258 16:25:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:37.258 [2024-07-24 16:25:33.746730] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
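The poller_cost line in the summary above follows directly from the reported counters: total busy cycles divided by total_run_count gives the cycles per poller invocation, and dividing by tsc_hz converts that to nanoseconds. A quick check with shell arithmetic, using the numbers from the run above:

```shell
# Reproduce the poller_cost figures from the poller_perf summary above.
busy_cycles=2517834998   # "busy": total busy TSC cycles over the 1 s run
run_count=281000         # "total_run_count": poller invocations
tsc_hz=2500000000        # "tsc_hz": 2.5 GHz timestamp counter

cost_cyc=$(( busy_cycles / run_count ))           # cycles per poller call
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # same cost in nanoseconds
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
```

This prints `poller_cost: 8960 (cyc), 3584 (nsec)`, matching the summary table.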
00:08:37.258 [2024-07-24 16:25:33.746810] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535843 ] 00:08:37.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.258 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:37.258 [same pair of messages repeated for devices 0000:3d:01.1 through 0000:3f:02.1] 00:08:37.258
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.258 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:37.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.258 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:37.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.258 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:37.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.258 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:37.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.258 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:37.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:37.258 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:37.258 [2024-07-24 16:25:33.946392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.517 [2024-07-24 16:25:34.231928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.517 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:08:39.422 ====================================== 00:08:39.422 busy:2504480056 (cyc) 00:08:39.422 total_run_count: 3657000 00:08:39.422 tsc_hz: 2500000000 (cyc) 00:08:39.422 ====================================== 00:08:39.422 poller_cost: 684 (cyc), 273 (nsec) 00:08:39.422 00:08:39.422 real 0m2.098s 00:08:39.422 user 0m1.859s 00:08:39.422 sys 0m0.229s 00:08:39.422 16:25:35 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.422 16:25:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:39.422 ************************************ 00:08:39.422 END TEST thread_poller_perf 00:08:39.422 ************************************ 00:08:39.422 16:25:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:39.422 00:08:39.422 real 0m4.499s 00:08:39.422 user 0m3.837s 00:08:39.422 sys 0m0.663s 00:08:39.422 16:25:35 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.422 16:25:35 thread -- common/autotest_common.sh@10 -- # set +x 00:08:39.422 ************************************ 00:08:39.422 END TEST thread 00:08:39.422 ************************************ 00:08:39.422 16:25:35 -- spdk/autotest.sh@184 -- # [[ 1 -eq 1 ]] 00:08:39.422 16:25:35 -- spdk/autotest.sh@185 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:08:39.422 16:25:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.422 16:25:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.422 16:25:35 -- common/autotest_common.sh@10 -- # set +x 00:08:39.422 ************************************ 00:08:39.422 START TEST accel 00:08:39.422 ************************************ 00:08:39.422 16:25:35 accel -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh 00:08:39.422 * Looking for test storage... 
00:08:39.422 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:08:39.422 16:25:36 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:39.422 16:25:36 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:39.422 16:25:36 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:39.422 16:25:36 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1536205 00:08:39.422 16:25:36 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:39.422 16:25:36 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:39.422 16:25:36 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:39.422 16:25:36 accel -- accel/accel.sh@63 -- # waitforlisten 1536205 00:08:39.422 16:25:36 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:39.422 16:25:36 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.422 16:25:36 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.422 16:25:36 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:39.422 16:25:36 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:39.422 16:25:36 accel -- common/autotest_common.sh@831 -- # '[' -z 1536205 ']' 00:08:39.422 16:25:36 accel -- accel/accel.sh@41 -- # jq -r . 00:08:39.422 16:25:36 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.422 16:25:36 accel -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.422 16:25:36 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.422 16:25:36 accel -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.422 16:25:36 accel -- common/autotest_common.sh@10 -- # set +x 00:08:39.422 [2024-07-24 16:25:36.134247] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:08:39.422 [2024-07-24 16:25:36.134372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536205 ] 00:08:39.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.422 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:39.422 [same pair of messages repeated for devices 0000:3d:01.1 through 0000:3f:02.1] 00:08:39.422
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.422 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:39.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.422 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:39.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.422 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:39.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.422 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:39.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.422 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:39.422 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.422 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:39.681 [2024-07-24 16:25:36.357957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.940 [2024-07-24 16:25:36.645368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.318 16:25:37 accel -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.318 16:25:37 accel -- common/autotest_common.sh@864 -- # return 0 00:08:41.318 16:25:37 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:41.318 16:25:37 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:41.318 16:25:37 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:41.318 16:25:37 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:41.318 16:25:37 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:41.318 16:25:37 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:41.318 16:25:37 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.318 16:25:37 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:41.318 16:25:37 accel -- common/autotest_common.sh@10 -- # set +x 00:08:41.318 16:25:37 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.318 16:25:37 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:41.318 16:25:37 accel -- accel/accel.sh@72 -- # IFS== 00:08:41.318 16:25:37 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:41.318 16:25:37 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:41.318 [the same four xtrace lines, accel.sh@71 through accel.sh@73, repeated for each remaining opcode] 00:08:41.318 16:25:37 accel -- accel/accel.sh@75 -- # killprocess 1536205 00:08:41.318 16:25:37 accel -- common/autotest_common.sh@950 -- # '[' -z 1536205 ']' 00:08:41.318 16:25:37 accel -- common/autotest_common.sh@954 -- # kill -0 1536205 00:08:41.318 16:25:37 accel -- common/autotest_common.sh@955 -- # uname 00:08:41.318 16:25:37 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.318 16:25:37 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1536205 00:08:41.318 16:25:38 accel -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.318 16:25:38 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.318 16:25:38 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1536205' 00:08:41.318 killing process with pid 1536205 00:08:41.318 16:25:38 accel -- common/autotest_common.sh@969 -- # kill 1536205 00:08:41.318 16:25:38 accel -- common/autotest_common.sh@974 -- # wait 1536205 00:08:44.637 16:25:41 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:44.637 16:25:41 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:44.637 16:25:41 accel -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
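The expected_opcs loop traced above consumes the output of `accel_get_opc_assignments` after the jq filter `. | to_entries | map("\(.key)=\(.value)") | .[]` flattens the JSON opcode-to-module map into one `key=value` line per entry. A rough Python equivalent of that flattening (the sample opcode map here is made up for illustration, not taken from the RPC output):

```python
# Mimic jq's '. | to_entries | map("\(.key)=\(.value)") | .[]' on a JSON
# object: emit one "key=value" string per entry, in document order.
import json

raw = '{"copy": "software", "crc32c": "software"}'  # hypothetical opcode map
pairs = [f"{key}={value}" for key, value in json.loads(raw).items()]
for line in pairs:
    print(line)
```

Each emitted pair is then split on `=` by the loop's `IFS== read -r opc module`, which is why the trace shows one `expected_opcs["$opc"]=software` assignment per opcode.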
00:08:44.637 16:25:41 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.637 16:25:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:44.637 16:25:41 accel.accel_help -- common/autotest_common.sh@1125 -- # accel_perf -h 00:08:44.637 16:25:41 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:44.637 16:25:41 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:44.637 16:25:41 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:44.637 16:25:41 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:44.637 16:25:41 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.637 16:25:41 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.637 16:25:41 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:44.637 16:25:41 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:44.637 16:25:41 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:08:44.637 16:25:41 accel.accel_help -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.637 16:25:41 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:44.896 16:25:41 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:44.896 16:25:41 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:44.896 16:25:41 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.896 16:25:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:44.896 ************************************ 00:08:44.896 START TEST accel_missing_filename 00:08:44.896 ************************************ 00:08:44.896 16:25:41 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress 00:08:44.896 16:25:41 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # local es=0 00:08:44.896 16:25:41 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:44.896 16:25:41 accel.accel_missing_filename -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:08:44.896 16:25:41 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.896 16:25:41 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # type -t accel_perf 00:08:44.896 16:25:41 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.896 16:25:41 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:08:44.896 16:25:41 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:44.896 16:25:41 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:44.896 16:25:41 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:44.896 16:25:41 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:44.896 16:25:41 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.896 16:25:41 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.896 16:25:41 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:44.896 16:25:41 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:44.896 16:25:41 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:44.896 [2024-07-24 16:25:41.618930] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:08:44.896 [2024-07-24 16:25:41.619034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537263 ] 00:08:44.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.896 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:44.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.896 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:44.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.896 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:44.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.896 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:44.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.896 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:44.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.896 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:44.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.896 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:44.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:08:44.896 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:44.896 [same pair of messages repeated for devices 0000:3d:02.0 through 0000:3f:01.4] 00:08:45.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.155 EAL:
Requested device 0000:3f:01.5 cannot be used 00:08:45.155 [same pair of messages repeated for devices 0000:3f:01.6 through 0000:3f:02.7] 00:08:45.155 [2024-07-24 16:25:41.846311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.414 [2024-07-24 16:25:42.125081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.672 [2024-07-24 16:25:42.454420] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:46.608 [2024-07-24 16:25:43.188048] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:46.866 A filename is required.
00:08:47.134 16:25:43 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # es=234 00:08:47.134 16:25:43 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:47.134 16:25:43 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # es=106 00:08:47.134 16:25:43 accel.accel_missing_filename -- common/autotest_common.sh@663 -- # case "$es" in 00:08:47.134 16:25:43 accel.accel_missing_filename -- common/autotest_common.sh@670 -- # es=1 00:08:47.134 16:25:43 accel.accel_missing_filename -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:47.134 00:08:47.134 real 0m2.183s 00:08:47.134 user 0m1.891s 00:08:47.134 sys 0m0.311s 00:08:47.134 16:25:43 accel.accel_missing_filename -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.134 16:25:43 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:47.134 ************************************ 00:08:47.134 END TEST accel_missing_filename 00:08:47.134 ************************************ 00:08:47.134 16:25:43 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:47.134 16:25:43 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:47.134 16:25:43 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.134 16:25:43 accel -- common/autotest_common.sh@10 -- # set +x 00:08:47.134 ************************************ 00:08:47.134 START TEST accel_compress_verify 00:08:47.134 ************************************ 00:08:47.134 16:25:43 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:47.134 16:25:43 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # local es=0 00:08:47.134 16:25:43 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # valid_exec_arg 
accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:47.134 16:25:43 accel.accel_compress_verify -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:08:47.134 16:25:43 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.134 16:25:43 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # type -t accel_perf 00:08:47.134 16:25:43 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:47.134 16:25:43 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:47.134 16:25:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:47.134 16:25:43 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:47.134 16:25:43 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:47.134 16:25:43 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:47.134 16:25:43 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:47.134 16:25:43 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:47.134 16:25:43 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:47.134 16:25:43 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:47.134 16:25:43 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:47.134 [2024-07-24 16:25:43.886084] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:08:47.134 [2024-07-24 16:25:43.886199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537597 ] 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:47.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:47.396 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.396 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:47.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.397 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:47.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.397 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:47.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.397 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:47.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.397 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:47.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.397 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:47.397 [2024-07-24 16:25:44.113816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.655 [2024-07-24 16:25:44.401256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.914 [2024-07-24 16:25:44.727170] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:48.851 [2024-07-24 16:25:45.430077] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:49.110 00:08:49.110 Compression does not support the verify option, aborting. 
00:08:49.369 16:25:45 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # es=161 00:08:49.369 16:25:45 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:49.369 16:25:45 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # es=33 00:08:49.369 16:25:45 accel.accel_compress_verify -- common/autotest_common.sh@663 -- # case "$es" in 00:08:49.369 16:25:45 accel.accel_compress_verify -- common/autotest_common.sh@670 -- # es=1 00:08:49.369 16:25:45 accel.accel_compress_verify -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:49.369 00:08:49.369 real 0m2.153s 00:08:49.369 user 0m1.871s 00:08:49.369 sys 0m0.304s 00:08:49.369 16:25:45 accel.accel_compress_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.369 16:25:45 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:49.369 ************************************ 00:08:49.369 END TEST accel_compress_verify 00:08:49.369 ************************************ 00:08:49.369 16:25:46 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:49.369 16:25:46 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:49.369 16:25:46 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.369 16:25:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:49.369 ************************************ 00:08:49.369 START TEST accel_wrong_workload 00:08:49.369 ************************************ 00:08:49.369 16:25:46 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w foobar 00:08:49.369 16:25:46 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # local es=0 00:08:49.369 16:25:46 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:49.369 16:25:46 accel.accel_wrong_workload -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:08:49.369 16:25:46 
accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.369 16:25:46 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # type -t accel_perf 00:08:49.369 16:25:46 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.369 16:25:46 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:08:49.369 16:25:46 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:49.369 16:25:46 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:49.369 16:25:46 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:49.369 16:25:46 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:49.369 16:25:46 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:49.369 16:25:46 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:49.369 16:25:46 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:49.369 16:25:46 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:49.369 16:25:46 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:49.369 Unsupported workload type: foobar 00:08:49.370 [2024-07-24 16:25:46.113627] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:49.370 accel_perf options: 00:08:49.370 [-h help message] 00:08:49.370 [-q queue depth per core] 00:08:49.370 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:49.370 [-T number of threads per core 00:08:49.370 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:08:49.370 [-t time in seconds] 00:08:49.370 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:49.370 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:49.370 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:49.370 [-l for compress/decompress workloads, name of uncompressed input file 00:08:49.370 [-S for crc32c workload, use this seed value (default 0) 00:08:49.370 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:49.370 [-f for fill workload, use this BYTE value (default 255) 00:08:49.370 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:49.370 [-y verify result if this switch is on] 00:08:49.370 [-a tasks to allocate per core (default: same value as -q)] 00:08:49.370 Can be used to spread operations across a wider range of memory. 
00:08:49.370 16:25:46 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # es=1 00:08:49.370 16:25:46 accel.accel_wrong_workload -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:49.370 16:25:46 accel.accel_wrong_workload -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:49.370 16:25:46 accel.accel_wrong_workload -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:49.370 00:08:49.370 real 0m0.087s 00:08:49.370 user 0m0.078s 00:08:49.370 sys 0m0.052s 00:08:49.370 16:25:46 accel.accel_wrong_workload -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.370 16:25:46 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:49.370 ************************************ 00:08:49.370 END TEST accel_wrong_workload 00:08:49.370 ************************************ 00:08:49.370 16:25:46 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:49.370 16:25:46 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:49.370 16:25:46 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.370 16:25:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:49.370 ************************************ 00:08:49.370 START TEST accel_negative_buffers 00:08:49.370 ************************************ 00:08:49.370 16:25:46 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:49.370 16:25:46 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # local es=0 00:08:49.370 16:25:46 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:49.370 16:25:46 accel.accel_negative_buffers -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:08:49.370 16:25:46 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.370 16:25:46 accel.accel_negative_buffers -- 
common/autotest_common.sh@642 -- # type -t accel_perf 00:08:49.370 16:25:46 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:49.370 16:25:46 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:08:49.370 16:25:46 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:49.370 16:25:46 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:49.370 16:25:46 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:49.370 16:25:46 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:49.370 16:25:46 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:49.370 16:25:46 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:49.370 16:25:46 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:49.370 16:25:46 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:49.370 16:25:46 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:49.629 -x option must be non-negative. 00:08:49.629 [2024-07-24 16:25:46.281421] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:49.629 accel_perf options: 00:08:49.629 [-h help message] 00:08:49.629 [-q queue depth per core] 00:08:49.629 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:49.629 [-T number of threads per core 00:08:49.629 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:08:49.629 [-t time in seconds] 00:08:49.629 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:49.629 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:49.629 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:49.629 [-l for compress/decompress workloads, name of uncompressed input file 00:08:49.629 [-S for crc32c workload, use this seed value (default 0) 00:08:49.629 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:49.629 [-f for fill workload, use this BYTE value (default 255) 00:08:49.629 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:49.629 [-y verify result if this switch is on] 00:08:49.629 [-a tasks to allocate per core (default: same value as -q)] 00:08:49.629 Can be used to spread operations across a wider range of memory. 
00:08:49.629 16:25:46 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # es=1 00:08:49.629 16:25:46 accel.accel_negative_buffers -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:49.629 16:25:46 accel.accel_negative_buffers -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:49.629 16:25:46 accel.accel_negative_buffers -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:49.629 00:08:49.629 real 0m0.081s 00:08:49.629 user 0m0.066s 00:08:49.629 sys 0m0.056s 00:08:49.629 16:25:46 accel.accel_negative_buffers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.629 16:25:46 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:49.629 ************************************ 00:08:49.629 END TEST accel_negative_buffers 00:08:49.629 ************************************ 00:08:49.629 16:25:46 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:49.629 16:25:46 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:49.629 16:25:46 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.629 16:25:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:49.629 ************************************ 00:08:49.629 START TEST accel_crc32c 00:08:49.629 ************************************ 00:08:49.629 16:25:46 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:49.629 16:25:46 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:49.629 16:25:46 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:49.629 16:25:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:49.629 16:25:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:49.629 16:25:46 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:49.629 16:25:46 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:49.629 16:25:46 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:49.629 16:25:46 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:49.629 16:25:46 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:49.629 16:25:46 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:49.629 16:25:46 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:49.629 16:25:46 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:49.629 16:25:46 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:49.629 16:25:46 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:49.629 [2024-07-24 16:25:46.441794] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:08:49.629 [2024-07-24 16:25:46.441891] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538142 ] 00:08:49.889 [2024-07-24 16:25:46.666093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.148 [2024-07-24 16:25:46.947669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:50.716 
16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:50.716 16:25:47 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.716 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:50.717 16:25:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:53.251 16:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:53.251 16:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:53.251 16:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:53.251 16:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:53.251 16:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:53.251 16:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:53.251 16:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:53.251 16:25:49 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:53.252 16:25:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:53.252 00:08:53.252 real 0m3.218s 00:08:53.252 user 0m2.923s 00:08:53.252 sys 0m0.293s 00:08:53.252 16:25:49 accel.accel_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.252 16:25:49 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:53.252 ************************************ 00:08:53.252 END TEST accel_crc32c 00:08:53.252 ************************************ 00:08:53.252 
16:25:49 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:53.252 16:25:49 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:53.252 16:25:49 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.252 16:25:49 accel -- common/autotest_common.sh@10 -- # set +x 00:08:53.252 ************************************ 00:08:53.252 START TEST accel_crc32c_C2 00:08:53.252 ************************************ 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:53.252 16:25:49 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
00:08:53.252 [2024-07-24 16:25:49.742402] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:08:53.252 [2024-07-24 16:25:49.742503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538689 ]
00:08:53.252 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:53.252 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:53.252 [the two messages above repeat for each remaining QAT virtual function: 0000:3d:01.1-01.7, 0000:3d:02.0-02.7, 0000:3f:01.0-01.7, 0000:3f:02.0-02.7]
00:08:53.252 [2024-07-24 16:25:49.967202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:53.511 [2024-07-24 16:25:50.254227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.770 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:53.771 16:25:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:56.315
00:08:56.315 real 0m3.192s
00:08:56.315 user 0m2.903s
00:08:56.315 sys 0m0.288s
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:56.315 16:25:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:08:56.315 ************************************
00:08:56.315 END TEST accel_crc32c_C2
00:08:56.315 ************************************
00:08:56.315 16:25:52 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:08:56.315 16:25:52 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:08:56.315 16:25:52 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:56.315 16:25:52 accel -- common/autotest_common.sh@10 -- # set +x
00:08:56.315 ************************************
00:08:56.315 START TEST accel_copy
00:08:56.316 ************************************
00:08:56.316 16:25:52 accel.accel_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy -y
00:08:56.316 16:25:52 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:08:56.316 16:25:52 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:08:56.316 16:25:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:56.316 16:25:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:56.316 16:25:52 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:08:56.316 16:25:52 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:08:56.316 16:25:52 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:08:56.316 16:25:52 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:56.316 16:25:52 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:56.316 16:25:52 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:56.316 16:25:52 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:56.316 16:25:52 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:56.316 16:25:52 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:08:56.316 16:25:52 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:08:56.316 [2024-07-24 16:25:53.013543] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:08:56.316 [2024-07-24 16:25:53.013647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539235 ]
00:08:56.316 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:56.316 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:56.316 [the two messages above repeat for each remaining QAT virtual function: 0000:3d:01.1-01.7, 0000:3d:02.0-02.7, 0000:3f:01.0-01.7, 0000:3f:02.0-02.7]
00:08:56.576 [2024-07-24 16:25:53.239988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:56.837 [2024-07-24 16:25:53.527017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:57.095 16:25:53 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:57.095 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:57.095 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:57.095 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:57.095 16:25:53 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:57.095 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:57.095 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:57.095 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:57.095 16:25:53 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:08:57.095 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:57.095 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:57.095 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:57.096 16:25:53 accel.accel_copy --
accel/accel.sh@20 -- # val= 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:57.096 16:25:53 accel.accel_copy 
-- accel/accel.sh@20 -- # val=32 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:57.096 16:25:53 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:57.096 16:25:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n 
software ]]
00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:08:59.761 16:25:56 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:59.761
00:08:59.761 real 0m3.206s
00:08:59.761 user 0m2.890s
00:08:59.761 sys 0m0.317s
00:08:59.761 16:25:56 accel.accel_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:59.761 16:25:56 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:08:59.761 ************************************
00:08:59.761 END TEST accel_copy
00:08:59.761 ************************************
00:08:59.761 16:25:56 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:08:59.761 16:25:56 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:08:59.761 16:25:56 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:59.761 16:25:56 accel -- common/autotest_common.sh@10 -- # set +x
00:08:59.761 ************************************
00:08:59.761 START TEST accel_fill
00:08:59.761 ************************************
00:08:59.761 16:25:56 accel.accel_fill -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:08:59.761 16:25:56 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:08:59.761 16:25:56 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:08:59.761 16:25:56 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:59.761 16:25:56 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:59.761 16:25:56 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:08:59.761 16:25:56 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:08:59.761 16:25:56 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:08:59.761 16:25:56 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:59.761 16:25:56 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:59.761 16:25:56 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:59.761 16:25:56 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:59.761 16:25:56 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:59.761 16:25:56 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=,
00:08:59.761 16:25:56 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:08:59.761 [2024-07-24 16:25:56.306398] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:08:59.761 [2024-07-24 16:25:56.306503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539801 ]
00:08:59.761 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:59.761 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:59.761 [the two messages above repeat for each remaining QAT virtual function: 0000:3d:01.1-01.7, 0000:3d:02.0-02.7, 0000:3f:01.0-01.7, 0000:3f:02.0-02.7]
00:08:59.762 [2024-07-24 16:25:56.531473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:00.020 [2024-07-24 16:25:56.814946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:00.279 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:00.279 16:25:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:09:00.279 16:25:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:09:00.279 16:25:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:09:00.279 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:00.279 16:25:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:09:00.279 16:25:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:09:00.279 16:25:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:09:00.279 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
[... case "$var" in / IFS=: / read -r var val trace lines between val assignments repeated, condensed below ...]
00:09:00.576 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:00.576 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:00.577 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:09:00.577 16:25:57 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:09:00.577 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:09:00.577 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:00.577 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:00.577 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:09:00.577 16:25:57 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:09:00.577 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:09:00.577 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:09:00.577 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:09:00.577 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:09:00.577 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:09:00.577 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:00.577 16:25:57 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:03.136 16:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:03.136 16:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:03.136 16:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:03.136 16:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:03.136 16:25:59
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:09:03.136 16:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:03.136 16:25:59 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:03.136 16:25:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:03.136 16:25:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:09:03.136 16:25:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:03.136
00:09:03.136 real 0m3.233s
00:09:03.136 user 0m2.924s
00:09:03.136 sys 0m0.309s
00:09:03.136 16:25:59 accel.accel_fill -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:03.136 16:25:59 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:09:03.136 ************************************
00:09:03.136 END TEST accel_fill
00:09:03.136 ************************************
00:09:03.136 16:25:59 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:09:03.136 16:25:59 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:09:03.136 16:25:59 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:03.136 16:25:59 accel -- common/autotest_common.sh@10 -- # set +x
00:09:03.136 ************************************
00:09:03.136 START TEST accel_copy_crc32c
00:09:03.136 ************************************
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:09:03.136 16:25:59 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:09:03.136 [2024-07-24 16:25:59.614087] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:09:03.136 [2024-07-24 16:25:59.614199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540468 ]
00:09:03.136 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:03.136 EAL: Requested device 0000:3d:01.0 cannot be used
[... qat_pci_device_allocate()/EAL pair repeated for devices 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7 ...]
00:09:03.136 [2024-07-24 16:25:59.839990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:03.395 [2024-07-24 16:26:00.135864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:03.654 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:03.654 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:03.654 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:09:03.654 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:03.654 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:03.654 16:26:00 accel.accel_copy_crc32c --
accel/accel.sh@19 -- # read -r var val
[... case "$var" in / IFS=: / read -r var val trace lines between val assignments repeated, condensed below ...]
00:09:03.654 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:03.654 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:03.654 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:09:03.654 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:09:03.654 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:09:03.654 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:03.654 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:03.654 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:03.655 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:09:03.655 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:09:03.655 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:09:03.655 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:09:03.655 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:09:03.655 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:09:03.655 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:09:03.655 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:03.655 16:26:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:06.189 16:26:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:06.189 16:26:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:06.189 16:26:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:06.189
16:26:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:06.189 16:26:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:06.189 16:26:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:06.190 16:26:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:06.190 16:26:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:06.190 16:26:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:09:06.190 16:26:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:06.190
00:09:06.190 real 0m3.287s
00:09:06.190 user 0m2.989s
00:09:06.190 sys 0m0.301s
00:09:06.190 16:26:02 accel.accel_copy_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:06.190 16:26:02 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:09:06.190 ************************************
00:09:06.190 END TEST accel_copy_crc32c
00:09:06.190 ************************************
00:09:06.190 16:26:02 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:09:06.190 16:26:02 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:09:06.190 16:26:02 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:06.190 16:26:02 accel -- common/autotest_common.sh@10 -- # set +x
00:09:06.190 ************************************
00:09:06.190 START TEST accel_copy_crc32c_C2
00:09:06.190 ************************************
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:09:06.190 16:26:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:09:06.190 [2024-07-24 16:26:02.984184] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:09:06.190 [2024-07-24 16:26:02.984283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541128 ]
00:09:06.449 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:06.449 EAL: Requested device 0000:3d:01.0 cannot be used
[... qat_pci_device_allocate()/EAL pair repeated for devices 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7 ...]
00:09:06.450 [2024-07-24 16:26:03.211345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:06.707 [2024-07-24 16:26:03.482385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:06.966 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:06.966 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:06.966 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2
-- accel/accel.sh@21 -- # case "$var" in 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.967 16:26:03 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.967 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:07.226 
16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:07.226 16:26:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:09.756 16:26:06 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:09.756 
00:09:09.756 real 0m3.201s
00:09:09.756 user 0m2.905s
00:09:09.756 sys 0m0.288s
00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:09.756 16:26:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:09:09.756 ************************************
00:09:09.756 END TEST accel_copy_crc32c_C2
00:09:09.756 ************************************
00:09:09.756 16:26:06 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:09:09.756 16:26:06 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:09:09.756 16:26:06 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:09.756 16:26:06 accel -- common/autotest_common.sh@10 -- # set +x
00:09:09.756 ************************************
00:09:09.756 START TEST accel_dualcast
00:09:09.756 ************************************
00:09:09.756 16:26:06 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dualcast -y
00:09:09.756 16:26:06 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:09:09.756 16:26:06 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:09:09.756 16:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:09:09.756 16:26:06 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:09:09.756 16:26:06 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:09:09.756 16:26:06 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:09:09.756 16:26:06 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:09:09.756 16:26:06 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:09.756 16:26:06 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:09.756 16:26:06 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:09.756 16:26:06 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:09.756 16:26:06 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:09.756 16:26:06 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:09:09.756 16:26:06 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:09:09.756 [2024-07-24 16:26:06.266615] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:09:09.756 [2024-07-24 16:26:06.266715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541674 ]
00:09:09.756 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:09.756 EAL: Requested device 0000:3d:01.0 cannot be used
(the two messages above repeat for devices 0000:3d:01.1 through 0000:3f:02.7)
00:09:09.756 [2024-07-24 16:26:06.490873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:10.015 [2024-07-24 16:26:06.773311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
(the case/IFS=:/read xtrace triplet above follows every "val=..." line below and is elided)
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:09:10.274 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:09:10.533 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:09:10.533 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:09:10.533 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
00:09:10.533 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:09:10.533 16:26:07 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:09:13.067 16:26:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
(five more identical empty "val=" xtrace lines follow here)
00:09:13.067 16:26:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:13.067 16:26:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:09:13.067 16:26:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:13.067 real 0m3.261s
00:09:13.067 user 0m2.962s
00:09:13.067 sys 0m0.299s
00:09:13.067 16:26:09 accel.accel_dualcast -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:13.067 16:26:09 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:09:13.067 ************************************
00:09:13.067 END TEST accel_dualcast
00:09:13.067 ************************************
00:09:13.068 16:26:09 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:09:13.068 16:26:09 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:09:13.068 16:26:09 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:13.068 16:26:09 accel -- common/autotest_common.sh@10 -- # set +x
00:09:13.068 ************************************
00:09:13.068 START TEST accel_compare
00:09:13.068 ************************************
00:09:13.068 16:26:09 accel.accel_compare -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compare -y
00:09:13.068 16:26:09 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:09:13.068 16:26:09 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:09:13.068 16:26:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:09:13.068 16:26:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
00:09:13.068 16:26:09 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:09:13.068 16:26:09 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:09:13.068 16:26:09 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config
00:09:13.068 16:26:09 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:13.068 16:26:09 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:13.068 16:26:09 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:13.068 16:26:09 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:13.068 16:26:09 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:13.068 16:26:09 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=,
00:09:13.068 16:26:09 accel.accel_compare -- accel/accel.sh@41 -- # jq -r .
00:09:13.068 [2024-07-24 16:26:09.609690] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:09:13.068 [2024-07-24 16:26:09.609796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542235 ]
00:09:13.068 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.068 EAL: Requested device 0000:3d:01.0 cannot be used
(the two messages above repeat for devices 0000:3d:01.1 through 0000:3f:02.7)
00:09:13.068 [2024-07-24 16:26:09.837201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:13.327 [2024-07-24 16:26:10.123772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:13.894 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:09:13.894 16:26:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in
00:09:13.894 16:26:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=:
00:09:13.894 16:26:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val
(the case/IFS=:/read xtrace triplet above follows every "val=..." line below and is elided)
00:09:13.894 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:09:13.894 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1
00:09:13.894 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:09:13.894 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:09:13.894 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val=compare
00:09:13.894 16:26:10 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare
00:09:13.894 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:13.894 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:09:13.894 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val=software
00:09:13.894 16:26:10 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software
00:09:13.895 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:09:13.895 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val=32
00:09:13.895 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val=1
00:09:13.895 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds'
00:09:13.895 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes
00:09:13.895 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:09:13.895 16:26:10 accel.accel_compare -- accel/accel.sh@20 -- # val=
00:09:16.429 16:26:12 accel.accel_compare -- accel/accel.sh@20 -- # val=
(four more identical empty "val=" xtrace lines follow here)
00:09:16.429 16:26:12 accel.accel_compare
-- accel/accel.sh@21 -- # case "$var" in 00:09:16.429 16:26:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:16.429 16:26:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:16.429 16:26:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:16.429 16:26:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:09:16.429 16:26:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:16.429 00:09:16.429 real 0m3.198s 00:09:16.429 user 0m2.884s 00:09:16.429 sys 0m0.312s 00:09:16.429 16:26:12 accel.accel_compare -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.429 16:26:12 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:09:16.429 ************************************ 00:09:16.429 END TEST accel_compare 00:09:16.429 ************************************ 00:09:16.429 16:26:12 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:09:16.429 16:26:12 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:09:16.429 16:26:12 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.429 16:26:12 accel -- common/autotest_common.sh@10 -- # set +x 00:09:16.429 ************************************ 00:09:16.429 START TEST accel_xor 00:09:16.429 ************************************ 00:09:16.429 16:26:12 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y 00:09:16.429 16:26:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:16.429 16:26:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:16.429 16:26:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.429 16:26:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.429 16:26:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:09:16.429 16:26:12 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf 
-c /dev/fd/62 -t 1 -w xor -y 00:09:16.429 16:26:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:16.429 16:26:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:16.429 16:26:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:16.429 16:26:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:16.429 16:26:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:16.429 16:26:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:16.429 16:26:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:16.429 16:26:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:09:16.429 [2024-07-24 16:26:12.892974] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:09:16.429 [2024-07-24 16:26:12.893079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542789 ] 00:09:16.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:09:16.430 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: 
Requested device 0000:3f:01.4 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:16.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.430 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:16.430 [2024-07-24 16:26:13.117242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.689 [2024-07-24 16:26:13.401461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 
16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:16.949 16:26:13 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:16.949 16:26:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
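The `accel_perf` invocations in this log vary only in their workload flags: `-t 1` (one-second run), `-w xor` (workload), `-y` (verify), and, for the second xor test, `-x 3` (three source buffers). A hedged sketch of assembling that argument list; `build_cmd` is an illustrative helper, and it only prints the command rather than executing the SPDK example binary:

```shell
#!/usr/bin/env bash
# Illustrative assembly of the accel_perf command line seen in the trace.
# $1 = workload (compare, xor, dif_verify, ...), $2 = optional xor source
# count passed through as -x.
build_cmd() {
  local -a cmd=(accel_perf -c /dev/fd/62 -t 1 -w "$1" -y)
  [ -n "${2:-}" ] && cmd+=(-x "$2")  # e.g. 3 xor source buffers
  echo "${cmd[@]}"
}
build_cmd xor 3  # → accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
```

Using an array for the arguments keeps each flag a separate word, so values with spaces (like a JSON config path) would survive quoting intact.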
00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:19.484 00:09:19.484 real 0m3.242s 00:09:19.484 user 0m2.927s 00:09:19.484 sys 0m0.313s 00:09:19.484 16:26:16 accel.accel_xor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:19.484 16:26:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:09:19.484 ************************************ 00:09:19.484 END TEST accel_xor 00:09:19.484 ************************************ 00:09:19.484 16:26:16 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:09:19.484 16:26:16 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:09:19.484 16:26:16 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:19.484 16:26:16 accel -- common/autotest_common.sh@10 -- # set +x 00:09:19.484 ************************************ 00:09:19.484 START TEST accel_xor 00:09:19.484 
************************************ 00:09:19.484 16:26:16 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y -x 3 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:19.484 16:26:16 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:09:19.484 [2024-07-24 16:26:16.213672] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:09:19.484 [2024-07-24 16:26:16.213777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543335 ] 00:09:19.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.743 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:19.743 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:02.3 cannot be used 
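The block of `qat_pci_device_allocate()` / `EAL: Requested device ... cannot be used` pairs repeats at every test start because the same fixed set of QAT virtual functions is probed each time: buses `3d` and `3f`, devices `01` and `02`, functions `0` through `7`, i.e. 32 BDF addresses. A sketch that just reproduces that enumeration (it touches no real PCI devices):

```shell
#!/usr/bin/env bash
# Enumerate the 32 QAT VF addresses rejected in the log. The pattern is
# read off the EAL messages above; this is a reconstruction, not SPDK code.
list_qat_vfs() {
  local bus dev fn
  for bus in 3d 3f; do
    for dev in 01 02; do
      for fn in 0 1 2 3 4 5 6 7; do
        echo "0000:${bus}:${dev}.${fn}"
      done
    done
  done
}
list_qat_vfs | wc -l  # 32 addresses, one per repeated rejection pair
```

All 32 fail with "Reached maximum number of QAT devices", so EAL declines each one and accel_perf falls back to the software module, which is what the later `[[ -n software ]]` checks confirm.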
00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:19.744 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:19.744 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:19.744 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:19.744 [2024-07-24 16:26:16.440481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.003 [2024-07-24 16:26:16.723855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@20 
-- # val= 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.262 
16:26:17 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:20.262 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.263 16:26:17 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:20.263 16:26:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.834 16:26:19 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:22.834 16:26:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:22.834 00:09:22.834 real 0m3.205s 00:09:22.834 user 0m2.882s 00:09:22.834 sys 0m0.319s 00:09:22.834 16:26:19 accel.accel_xor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.834 16:26:19 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:09:22.834 ************************************ 00:09:22.834 END TEST accel_xor 00:09:22.834 ************************************ 00:09:22.834 16:26:19 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:22.835 16:26:19 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:22.835 16:26:19 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.835 16:26:19 accel -- common/autotest_common.sh@10 -- # set +x 00:09:22.835 ************************************ 00:09:22.835 START TEST accel_dif_verify 00:09:22.835 ************************************ 00:09:22.835 16:26:19 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_verify 00:09:22.835 16:26:19 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:09:22.835 16:26:19 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:09:22.835 16:26:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:22.835 16:26:19 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:22.835 16:26:19 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:22.835 16:26:19 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
dif_verify 00:09:22.835 16:26:19 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:22.835 16:26:19 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:22.835 16:26:19 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:22.835 16:26:19 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:22.835 16:26:19 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:22.835 16:26:19 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:22.835 16:26:19 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:22.835 16:26:19 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:09:22.835 [2024-07-24 16:26:19.501634] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:09:22.835 [2024-07-24 16:26:19.501738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543899 ] 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:22.835 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:09:22.835 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:22.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:22.835 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:23.094 [2024-07-24 16:26:19.727186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.353 [2024-07-24 16:26:20.026899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 
00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:23.612 
16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.612 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 
00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:23.613 16:26:20 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:23.613 16:26:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@20 
-- # val= 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:09:26.148 16:26:22 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:26.148 00:09:26.148 real 0m3.239s 00:09:26.148 user 0m2.914s 00:09:26.148 sys 0m0.319s 00:09:26.148 16:26:22 accel.accel_dif_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.148 16:26:22 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:09:26.148 ************************************ 00:09:26.148 END TEST accel_dif_verify 00:09:26.148 ************************************ 00:09:26.148 16:26:22 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:26.148 16:26:22 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:26.148 16:26:22 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.148 16:26:22 accel -- common/autotest_common.sh@10 -- # set +x 00:09:26.148 ************************************ 00:09:26.148 START TEST accel_dif_generate 00:09:26.148 ************************************ 00:09:26.148 16:26:22 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate 00:09:26.148 16:26:22 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:09:26.148 16:26:22 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:09:26.148 16:26:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.148 16:26:22 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:09:26.148 16:26:22 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:09:26.148 16:26:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:26.148 16:26:22 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:09:26.148 16:26:22 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:26.148 16:26:22 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:26.148 16:26:22 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:26.148 16:26:22 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:26.148 16:26:22 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:26.148 16:26:22 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:09:26.148 16:26:22 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:09:26.148 [2024-07-24 16:26:22.817825] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:09:26.148 [2024-07-24 16:26:22.817928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544538 ] 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:26.148 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.148 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:26.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.149 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:26.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.149 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:26.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.149 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:26.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.149 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:26.149 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.149 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:26.408 [2024-07-24 16:26:23.042017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.666 [2024-07-24 16:26:23.322229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.924 16:26:23 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.924 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:09:26.925 16:26:23 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 
-- # read -r var val 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:26.925 16:26:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:29.460 16:26:25 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:09:29.460 16:26:25 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e 
]] 00:09:29.460 00:09:29.460 real 0m3.224s 00:09:29.460 user 0m2.933s 00:09:29.460 sys 0m0.290s 00:09:29.460 16:26:25 accel.accel_dif_generate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.460 16:26:25 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:09:29.460 ************************************ 00:09:29.460 END TEST accel_dif_generate 00:09:29.460 ************************************ 00:09:29.460 16:26:26 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:09:29.460 16:26:26 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:29.460 16:26:26 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.460 16:26:26 accel -- common/autotest_common.sh@10 -- # set +x 00:09:29.460 ************************************ 00:09:29.460 START TEST accel_dif_generate_copy 00:09:29.460 ************************************ 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate_copy 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:29.460 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:09:29.460 [2024-07-24 16:26:26.129375] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:09:29.460 [2024-07-24 16:26:26.129480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545186 ] 00:09:29.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:29.460 EAL: Requested device 0000:3d:01.0 cannot be used [this message pair repeats for devices 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7] 00:09:29.720 [2024-07-24 16:26:26.357321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.978 [2024-07-24 16:26:26.632160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 
00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.236 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.237 
16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:30.237 16:26:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:32.771 16:26:29 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- 
# IFS=: 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:32.771 00:09:32.771 real 0m3.209s 00:09:32.771 user 0m2.894s 00:09:32.771 sys 0m0.315s 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.771 16:26:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:09:32.771 ************************************ 00:09:32.771 END TEST accel_dif_generate_copy 00:09:32.771 ************************************ 00:09:32.771 16:26:29 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:09:32.771 16:26:29 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:32.771 16:26:29 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:09:32.771 16:26:29 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.771 16:26:29 accel -- common/autotest_common.sh@10 -- # set +x 00:09:32.771 ************************************ 00:09:32.771 START TEST accel_comp 00:09:32.771 ************************************ 00:09:32.771 16:26:29 accel.accel_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:32.771 16:26:29 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:09:32.771 16:26:29 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:09:32.771 16:26:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:32.771 16:26:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:32.771 16:26:29 accel.accel_comp -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:32.771 16:26:29 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:32.771 16:26:29 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:09:32.771 16:26:29 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:32.771 16:26:29 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:32.771 16:26:29 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:32.771 16:26:29 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:32.771 16:26:29 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:32.771 16:26:29 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:09:32.771 16:26:29 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:09:32.771 [2024-07-24 16:26:29.414068] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:09:32.771 [2024-07-24 16:26:29.414173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545774 ] 00:09:32.771 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:32.771 EAL: Requested device 0000:3d:01.0 cannot be used [this message pair repeats for devices 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7] 00:09:33.030 [2024-07-24 16:26:29.633818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.289 [2024-07-24 16:26:29.917489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- 
accel/accel.sh@20 -- # val=0x1 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:09:33.548 16:26:30 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:09:33.548 16:26:30 accel.accel_comp -- accel/accel.sh@21 -- 
[... repeated accel.sh@19-21 xtrace ('val= / case "$var" in / IFS=: / read -r var val') elided while accel_perf results were read back ...]
00:09:36.083 16:26:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:36.083 16:26:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:09:36.083 16:26:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:36.083
00:09:36.083 real 0m3.286s
00:09:36.083 user 0m2.991s
00:09:36.083 sys 0m0.296s
00:09:36.083 16:26:32 accel.accel_comp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:36.083 16:26:32 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:09:36.083 ************************************
00:09:36.083 END TEST accel_comp
00:09:36.083 ************************************
00:09:36.083 16:26:32 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:09:36.083 16:26:32 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:09:36.083 16:26:32 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:36.083 16:26:32 accel -- common/autotest_common.sh@10 -- # set +x
00:09:36.083 ************************************
00:09:36.083 START TEST accel_decomp
00:09:36.083 ************************************
00:09:36.083 16:26:32 accel.accel_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:09:36.083 16:26:32 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:09:36.083 16:26:32 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:09:36.083 16:26:32 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:09:36.083 16:26:32 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:09:36.083 16:26:32 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@31-41 build_accel_config xtrace (accel_json_cfg=(), '[[ 0 -gt 0 ]]' checks, 'jq -r .') elided ...]
00:09:36.083 [2024-07-24 16:26:32.779216] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
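The xtrace above boils down to a single `accel_perf` invocation launched by `accel.sh`. A minimal sketch of assembling the equivalent manual command line, assuming the workspace path seen in this log (substitute your own SPDK checkout via `SPDK_ROOT`):

```shell
#!/bin/sh
# Hypothetical default: the workspace path recorded in this log.
SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/crypto-phy-autotest/spdk}

# Flags as driven by accel.sh in the lines above:
#   -t 1           run the workload for 1 second
#   -w decompress  workload type under test
#   -l <file>      input file to decompress (the test's 'bib' corpus file)
#   -y             verify the output
CMD="$SPDK_ROOT/build/examples/accel_perf -t 1 -w decompress -l $SPDK_ROOT/test/accel/bib -y"
echo "$CMD"
```

This only reconstructs the command string; the CI harness additionally feeds a JSON accel config on `/dev/fd/62` via `build_accel_config`, which is omitted here.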
00:09:36.083 [2024-07-24 16:26:32.779316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546321 ]
00:09:36.083 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:36.083 EAL: Requested device 0000:3d:01.0 cannot be used
[... the same qat_pci_device_allocate()/EAL pair repeated for devices 0000:3d:01.1-01.7, 0000:3d:02.0-02.7, 0000:3f:01.0-01.7 and 0000:3f:02.0-02.7 ...]
00:09:36.342 [2024-07-24 16:26:33.001731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:36.601 [2024-07-24 16:26:33.266846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:36.860 16:26:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1
00:09:36.860 16:26:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress
00:09:36.860 16:26:33 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:09:36.860 16:26:33 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:36.860 16:26:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=software
00:09:36.860 16:26:33 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software
00:09:36.860 16:26:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:09:36.860 16:26:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:09:36.860 16:26:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:09:36.860 16:26:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=1
00:09:36.860 16:26:33 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:09:36.860 16:26:33 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
[... interleaved accel.sh@19-21 'val= / case "$var" in / IFS=: / read -r var val' xtrace elided ...]
00:09:39.396 16:26:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:39.396 16:26:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:09:39.396 16:26:35 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:39.396
00:09:39.396 real 0m3.172s
00:09:39.396 user 0m2.869s
00:09:39.396 sys 0m0.305s
00:09:39.396 16:26:35 accel.accel_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:39.396 16:26:35 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:09:39.396 ************************************
00:09:39.396 END TEST accel_decomp
00:09:39.396 ************************************
00:09:39.396 16:26:35 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:09:39.396 16:26:35 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']'
00:09:39.396 16:26:35 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:39.396 16:26:35 accel -- common/autotest_common.sh@10 -- # set +x
00:09:39.396 ************************************
00:09:39.396 START TEST accel_decomp_full
00:09:39.396 ************************************
00:09:39.396 16:26:35 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:09:39.396 16:26:35 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc
00:09:39.396 16:26:35 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module
00:09:39.396 16:26:35 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:09:39.396 16:26:35 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0
00:09:39.396 16:26:35 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@31-41 build_accel_config xtrace (accel_json_cfg=(), '[[ 0 -gt 0 ]]' checks, 'jq -r .') elided ...]
00:09:39.396 [2024-07-24 16:26:36.032053] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
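The only difference from the preceding accel_decomp run is the trailing `-o 0`. Judging from the xtrace, `-o` controls the transfer size: the default run records '4096 bytes', while this one records '111250 bytes', so `-o 0` appears to make accel_perf size transfers to the whole input. A sketch contrasting the two command lines (paths are the ones from this log; the interpretation of `-o 0` is an inference from the surrounding xtrace, not a verified flag description):

```shell
#!/bin/sh
SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/crypto-phy-autotest/spdk}
BIB="$SPDK_ROOT/test/accel/bib"

# accel_decomp: default transfer size (xtrace records '4096 bytes')
BASE="$SPDK_ROOT/build/examples/accel_perf -t 1 -w decompress -l $BIB -y"
# accel_decomp_full: adds -o 0 (xtrace records '111250 bytes' instead)
FULL="$BASE -o 0"
echo "$FULL"
```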
00:09:39.396 [2024-07-24 16:26:36.032165] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546867 ] 00:09:39.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.396 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:39.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.396 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:39.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.396 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:39.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.396 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:39.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.396 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:39.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.396 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:39.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.396 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:39.396 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:39.397 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:39.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.397 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:39.397 [2024-07-24 16:26:36.254225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.965 [2024-07-24 16:26:36.531898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 
accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:40.262 16:26:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:42.802 16:26:39 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:42.802 16:26:39 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:42.802 00:09:42.802 real 0m3.241s 00:09:42.802 user 0m2.931s 00:09:42.802 sys 0m0.305s 00:09:42.802 16:26:39 accel.accel_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.802 16:26:39 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:09:42.802 ************************************ 00:09:42.802 END TEST accel_decomp_full 00:09:42.802 
00:09:42.802 ************************************
00:09:42.802 16:26:39 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:09:42.802 16:26:39 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']'
00:09:42.802 16:26:39 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:42.802 16:26:39 accel -- common/autotest_common.sh@10 -- # set +x
00:09:42.802 ************************************
00:09:42.802 START TEST accel_decomp_mcore
00:09:42.802 ************************************
00:09:42.802 16:26:39 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:09:42.802 16:26:39 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:09:42.802 16:26:39 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:09:42.802 16:26:39 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:09:42.802 16:26:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:09:42.802 16:26:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
[... accel.sh@31-41 build_accel_config xtrace (accel_json_cfg=(), '[[ 0 -gt 0 ]]' checks, 'jq -r .') elided ...]
00:09:42.802 [2024-07-24 16:26:39.352711] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:09:42.802 [2024-07-24 16:26:39.352815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1547424 ]
00:09:42.802 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:42.802 EAL: Requested device 0000:3d:01.0 cannot be used
[... the same qat_pci_device_allocate()/EAL pair repeated through device 0000:3f:01.6, where this chunk of the log is truncated ...]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.802 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:42.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.802 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:42.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.802 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:42.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.802 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:42.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.802 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:42.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.802 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:42.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.802 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:42.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.802 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:42.802 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:42.802 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:42.802 [2024-07-24 16:26:39.578857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:43.061 [2024-07-24 16:26:39.875402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.061 [2024-07-24 16:26:39.875474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.061 [2024-07-24 16:26:39.875549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.061 [2024-07-24 16:26:39.875555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 
accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:09:43.628 16:26:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.162 16:26:42 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:46.162 00:09:46.162 real 0m3.299s 00:09:46.162 user 0m0.025s 00:09:46.162 sys 0m0.006s 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.162 16:26:42 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:46.162 ************************************ 00:09:46.162 END TEST accel_decomp_mcore 00:09:46.162 ************************************ 00:09:46.162 16:26:42 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:46.162 16:26:42 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:46.162 16:26:42 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.162 16:26:42 accel -- common/autotest_common.sh@10 -- # set +x 00:09:46.162 ************************************ 00:09:46.162 START TEST accel_decomp_full_mcore 00:09:46.162 ************************************ 00:09:46.162 16:26:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:46.162 16:26:42 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:46.162 16:26:42 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:46.162 16:26:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.162 16:26:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.162 16:26:42 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:46.162 16:26:42 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:46.162 16:26:42 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:46.162 16:26:42 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:46.162 16:26:42 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:46.162 16:26:42 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:46.162 16:26:42 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:46.162 16:26:42 
accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:46.162 16:26:42 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:46.162 16:26:42 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:46.162 [2024-07-24 16:26:42.736864] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:09:46.162 [2024-07-24 16:26:42.736969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1547976 ] 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:09:46.162 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 
EAL: Requested device 0000:3f:01.7 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.162 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:46.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.163 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:46.163 [2024-07-24 16:26:42.963475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.421 [2024-07-24 16:26:43.238649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.421 [2024-07-24 16:26:43.238720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.421 [2024-07-24 16:26:43.238793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.421 [2024-07-24 16:26:43.238801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.987 16:26:43 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.987 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 
00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.988 16:26:43 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:46.988 16:26:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:49.516 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:49.517 00:09:49.517 real 0m3.285s 00:09:49.517 user 0m9.415s 
00:09:49.517 sys 0m0.326s 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.517 16:26:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:49.517 ************************************ 00:09:49.517 END TEST accel_decomp_full_mcore 00:09:49.517 ************************************ 00:09:49.517 16:26:46 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:49.517 16:26:46 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:09:49.517 16:26:46 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.517 16:26:46 accel -- common/autotest_common.sh@10 -- # set +x 00:09:49.517 ************************************ 00:09:49.517 START TEST accel_decomp_mthread 00:09:49.517 ************************************ 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@12 
-- # build_accel_config 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:49.517 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:49.517 [2024-07-24 16:26:46.101633] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:09:49.517 [2024-07-24 16:26:46.101741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1548564 ] 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 
EAL: Requested device 0000:3d:01.6 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 
0000:3f:01.4 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:49.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.517 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:49.517 [2024-07-24 16:26:46.327728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.776 [2024-07-24 16:26:46.611972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:50.344 16:26:46 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:50.344 16:26:46 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:50.344 16:26:46 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:50.344 16:26:46 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:52.876 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:52.876 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:52.876 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:52.876 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:52.876 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:52.877 00:09:52.877 real 0m3.231s 00:09:52.877 user 0m2.926s 00:09:52.877 sys 0m0.302s 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.877 16:26:49 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:52.877 ************************************ 00:09:52.877 END TEST accel_decomp_mthread 00:09:52.877 ************************************ 00:09:52.877 16:26:49 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:52.877 16:26:49 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:52.877 16:26:49 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.877 16:26:49 accel -- common/autotest_common.sh@10 -- # set +x 00:09:52.877 ************************************ 00:09:52.877 START TEST accel_decomp_full_mthread 
00:09:52.877 ************************************ 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:52.877 16:26:49 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:52.877 [2024-07-24 16:26:49.416712] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:09:52.877 [2024-07-24 16:26:49.416822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1549200 ] 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:52.877 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:52.877 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:52.877 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:52.877 [2024-07-24 16:26:49.639940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.136 [2024-07-24 16:26:49.897260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 
00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.396 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.655 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.655 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.655 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.655 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:53.655 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:53.655 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:53.655 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:53.655 16:26:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:56.189 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- 
# val= 00:09:56.189 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:56.189 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:56.189 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:56.189 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:56.189 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:56.189 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:56.189 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:56.189 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:56.189 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:56.189 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:56.189 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 
00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:56.190
00:09:56.190 real	0m3.192s
00:09:56.190 user	0m2.865s
00:09:56.190 sys	0m0.323s
00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:56.190 16:26:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:09:56.190 ************************************
00:09:56.190 END TEST accel_decomp_full_mthread
00:09:56.190 ************************************
00:09:56.190 16:26:52 accel -- accel/accel.sh@124 -- # [[ y == y ]]
00:09:56.190 16:26:52 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1
00:09:56.190 16:26:52 accel -- accel/accel.sh@126 -- # get_expected_opcs
00:09:56.190 16:26:52 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:09:56.190 16:26:52 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1549795
00:09:56.190 16:26:52 accel -- accel/accel.sh@63 -- # waitforlisten 1549795
00:09:56.190 16:26:52 accel -- common/autotest_common.sh@831 -- # '[' -z 1549795 ']'
00:09:56.190 16:26:52 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:56.190 16:26:52 accel -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:56.190 16:26:52 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:09:56.190 16:26:52 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:56.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:56.190 16:26:52 accel -- accel/accel.sh@61 -- # build_accel_config
00:09:56.190 16:26:52 accel -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:56.190 16:26:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:56.190 16:26:52 accel -- common/autotest_common.sh@10 -- # set +x
00:09:56.190 16:26:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:56.190 16:26:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:56.190 16:26:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:56.190 16:26:52 accel -- accel/accel.sh@36 -- # [[ -n 1 ]]
00:09:56.190 16:26:52 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
00:09:56.190 16:26:52 accel -- accel/accel.sh@40 -- # local IFS=,
00:09:56.190 16:26:52 accel -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 16:26:52.711815] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:09:56.190 [2024-07-24 16:26:52.711935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1549795 ]
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:01.0 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:01.1 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:01.2 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:01.3 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:01.4 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:01.5 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:01.6 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:01.7 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:02.0 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:02.1 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:02.2 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:02.3 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:02.4 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:02.5 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:02.6 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3d:02.7 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:01.0 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:01.1 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:01.2 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:01.3 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:01.4 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:01.5 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:01.6 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:01.7 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:02.0 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:02.1 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:02.2 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:02.3 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:02.4 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:02.5 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:02.6 cannot be used
00:09:56.190 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:56.190 EAL: Requested device 0000:3f:02.7 cannot be used
00:09:56.190 [2024-07-24 16:26:52.937514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:56.450 [2024-07-24 16:26:53.223966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:57.825 [2024-07-24 16:26:54.636092] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
00:09:58.762 16:26:55 accel -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:58.762 16:26:55 accel -- common/autotest_common.sh@864 -- # return 0
00:09:58.762 16:26:55 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]]
00:09:58.762 16:26:55 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]]
00:09:58.762 16:26:55 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]]
00:09:58.762 16:26:55 accel -- accel/accel.sh@68 -- # [[ -n 1 ]]
00:09:58.762 16:26:55 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module
00:09:58.762 16:26:55 accel -- accel/accel.sh@56 -- # rpc_cmd save_config
00:09:58.762 16:26:55 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]'
00:09:58.762 16:26:55 accel -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.762 16:26:55 accel -- common/autotest_common.sh@10 -- # set +x
00:09:58.762 16:26:55 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module
00:09:59.022 16:26:55 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.022 "method": "compressdev_scan_accel_module",
00:09:59.022 16:26:55 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:09:59.022 16:26:55 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments
00:09:59.022 16:26:55 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:09:59.022 16:26:55 accel -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.022 16:26:55 accel -- common/autotest_common.sh@10 -- # set +x
00:09:59.022 16:26:55 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:09:59.022 16:26:55 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # IFS==
00:09:59.022 16:26:55 accel -- accel/accel.sh@72 -- # read -r opc module
00:09:59.022 16:26:55 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:09:59.022 16:26:55 accel -- accel/accel.sh@75 -- # killprocess 1549795
00:09:59.022 16:26:55 accel -- common/autotest_common.sh@950 -- # '[' -z 1549795 ']'
00:09:59.022 16:26:55 accel -- common/autotest_common.sh@954 -- # kill -0 1549795
00:09:59.022 16:26:55 accel -- common/autotest_common.sh@955 -- # uname
00:09:59.022 16:26:55 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:59.022 16:26:55 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1549795
00:09:59.022 16:26:55 accel -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:59.022 16:26:55 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:59.022 16:26:55 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1549795'
killing process with pid 1549795
00:09:59.022 16:26:55 accel -- common/autotest_common.sh@969 -- # kill 1549795
00:09:59.022 16:26:55 accel -- common/autotest_common.sh@974 -- # wait 1549795
00:10:02.342 16:26:58 accel -- accel/accel.sh@76 -- # trap - ERR
00:10:02.342 16:26:58 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:02.342 16:26:58 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:10:02.342 16:26:58 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:02.342 16:26:58 accel -- common/autotest_common.sh@10 -- # set +x
00:10:02.342 ************************************
00:10:02.342 START TEST accel_cdev_comp
00:10:02.342 ************************************
00:10:02.342 16:26:58 accel.accel_cdev_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]]
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=,
00:10:02.342 16:26:58 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 16:26:58.624850] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:10:02.342 [2024-07-24 16:26:58.624951] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550728 ]
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:01.1 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:01.2 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:01.3 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:01.4 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:01.5 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:01.6 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:01.7 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:02.0 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:02.1 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:02.2 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:02.3 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:02.4 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:02.5 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:02.6 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3d:02.7 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:01.0 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:01.1 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:01.2 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:01.3 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:01.4 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:01.5 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:01.6 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:01.7 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:02.0 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:02.1 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:02.2 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:02.3 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:02.4 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:02.5 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:02.6 cannot be used
00:10:02.342 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.342 EAL: Requested device 0000:3f:02.7 cannot be used
00:10:02.342 [2024-07-24 16:26:58.850693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:02.599 [2024-07-24 16:26:59.133442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:03.721 [2024-07-24 16:27:00.546613] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
[2024-07-24 16:27:00.549659] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 [2024-07-24 16:27:00.557862] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:03.721 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:03.722 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:03.722 16:27:00 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]]
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:10:05.626 16:27:02 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]]
00:10:05.626
00:10:05.626 real	0m3.749s
00:10:05.626 user	0m3.071s
00:10:05.626 sys	0m0.675s
00:10:05.626 16:27:02 accel.accel_cdev_comp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:05.626 16:27:02 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x
00:10:05.626 ************************************
00:10:05.626 END TEST accel_cdev_comp
00:10:05.626 ************************************
00:10:05.626 16:27:02 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:10:05.626 16:27:02 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:10:05.626 16:27:02 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:05.626 16:27:02 accel -- common/autotest_common.sh@10 -- # set +x
00:10:05.626 ************************************
00:10:05.626 START TEST accel_cdev_decomp
00:10:05.626 ************************************
00:10:05.626 16:27:02 accel.accel_cdev_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:10:05.626 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc
00:10:05.626 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module
00:10:05.626 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=:
00:10:05.626 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val
00:10:05.626 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:10:05.627 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:10:05.627 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config
00:10:05.627 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:10:05.627 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:10:05.627 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:05.627 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:05.627 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]]
00:10:05.627 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
00:10:05.627 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=,
00:10:05.627 16:27:02 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 16:27:02.458793] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:10:05.627 [2024-07-24 16:27:02.458897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1551610 ] 00:10:05.886 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:05.887 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:05.887 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:05.887 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:05.887 [2024-07-24 16:27:02.688145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.146 [2024-07-24 16:27:02.975433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.524 [2024-07-24 16:27:04.377540] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:07.524 [2024-07-24 16:27:04.380570] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:10:07.524 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:07.524 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.525 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.784 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.784 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:07.784 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.784 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.784 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.784 16:27:04 
accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:07.784 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.784 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.784 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.784 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 [2024-07-24 16:27:04.388860] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:07.785 16:27:04 
accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 16:27:04 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:07.785 16:27:04 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:09.692 16:27:06 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev 
== \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:09.692 00:10:09.692 real 0m3.769s 00:10:09.692 user 0m3.097s 00:10:09.692 sys 0m0.668s 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.692 16:27:06 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:09.692 ************************************ 00:10:09.692 END TEST accel_cdev_decomp 00:10:09.692 ************************************ 00:10:09.692 16:27:06 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:09.692 16:27:06 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:10:09.692 16:27:06 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.692 16:27:06 accel -- common/autotest_common.sh@10 -- # set +x 00:10:09.692 ************************************ 00:10:09.692 START TEST accel_cdev_decomp_full 00:10:09.692 ************************************ 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:10:09.692 16:27:06 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:10:09.692 [2024-07-24 16:27:06.307853] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:10:09.692 [2024-07-24 16:27:06.307954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1552179 ] 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.692 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:09.693 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:09.693 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:09.693 [2024-07-24 16:27:06.530544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.952 [2024-07-24 16:27:06.813298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.858 [2024-07-24 16:27:08.204625] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:11.858 [2024-07-24 16:27:08.207666] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 [2024-07-24 16:27:08.215036] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:10:11.858 16:27:08 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:13.237 16:27:09 
accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:13.237 00:10:13.237 real 0m3.707s 00:10:13.237 user 0m3.032s 00:10:13.237 sys 0m0.672s 00:10:13.237 16:27:09 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.238 16:27:09 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:10:13.238 ************************************ 00:10:13.238 END TEST accel_cdev_decomp_full 00:10:13.238 ************************************ 00:10:13.238 16:27:09 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:13.238 16:27:09 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:10:13.238 16:27:09 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.238 16:27:09 accel -- common/autotest_common.sh@10 -- # set +x 00:10:13.238 ************************************ 00:10:13.238 START TEST accel_cdev_decomp_mcore 00:10:13.238 ************************************ 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 
00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:13.238 16:27:10 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:13.238 [2024-07-24 16:27:10.094303] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
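The `build_accel_config` trace above collects RPC-style JSON snippets in the `accel_json_cfg` array, appends the compressdev entry (the `[[ -n 1 ]]` check passed), and joins the entries with `local IFS=,` before piping them through `jq -r .` to `accel_perf` on `/dev/fd/62`. A standalone sketch of those steps (the wrapper object accel.sh adds around the array is omitted here; `pmd: 0` appears to mean "auto-select a PMD", with the QAT PMD picked at runtime per the later `PMD being used: compress_qat` notices):

```shell
# Rebuild the config fragment the way the traced build_accel_config does
accel_json_cfg=()
# compressdev entry appended because the module flag was set in the trace
accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')

# "local IFS=," in accel.sh: array expansion joins entries with commas,
# yielding one JSON array that jq can pretty-print for accel_perf
IFS=,
config="[${accel_json_cfg[*]}]"
echo "$config"
```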
00:10:13.238 [2024-07-24 16:27:10.094406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1553383 ]
00:10:13.498 qat_pci_device_allocate(): Reached maximum number of QAT devices (message repeated once per device below)
00:10:13.498 EAL: Requested devices 0000:3d:01.0-0000:3d:01.7, 0000:3d:02.0-0000:3d:02.7, 0000:3f:01.0-0000:3f:01.7 and 0000:3f:02.0-0000:3f:02.7 cannot be used (32 identical message pairs condensed)
00:10:13.498 [2024-07-24 16:27:10.319016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:13.757 [2024-07-24 16:27:10.605505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:13.757 [2024-07-24 16:27:10.605576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:10:13.757 [2024-07-24 16:27:10.606422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:13.757 [2024-07-24 16:27:10.606428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:10:15.136 [2024-07-24 16:27:11.993923] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
00:10:15.136 [2024-07-24 16:27:11.997083] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d1a0 PMD being used: compress_qat
00:10:15.396 16:27:11 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
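The `-m 0xf` core mask passed to `accel_perf` is what produces the "Total cores available: 4" notice and the four reactors on cores 0-3 above: each set bit in the hex mask enables one core. A small sketch of that mapping (standalone illustration, not code from accel.sh):

```shell
# Expand a DPDK/SPDK-style hex core mask into the list of core IDs it enables
mask=0xf
cores=()
for ((i = 0; i < 64; i++)); do
    # bit i set in the mask means core i participates
    if (( (mask >> i) & 1 )); then
        cores+=("$i")
    fi
done
echo "Total cores available: ${#cores[@]}"   # 4 for 0xf
echo "cores: ${cores[*]}"                    # 0 1 2 3
```

A mask of 0x1 would pin the run to core 0 only, which is why the single-core variants of these tests report a much lower `user` time relative to `real`.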
00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 [2024-07-24 16:27:12.006935] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000010100 PMD being used: compress_qat 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # 
val=decompress 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:15.396 [2024-07-24 16:27:12.008860] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000017100 PMD being used: compress_qat 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 [2024-07-24 16:27:12.012954] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000020160 PMD being used: compress_qat 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 [2024-07-24 16:27:12.013126] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d280 PMD being used: compress_qat 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:15.396 16:27:12 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:15.396 16:27:12 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:17.302 16:27:13 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:17.302 00:10:17.302 real 0m3.921s 00:10:17.302 user 0m11.478s 00:10:17.302 sys 0m0.706s 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.302 16:27:13 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:17.302 ************************************ 00:10:17.302 END TEST accel_cdev_decomp_mcore 00:10:17.302 ************************************ 00:10:17.302 16:27:13 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:17.302 16:27:13 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:17.302 16:27:13 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.302 16:27:13 accel -- common/autotest_common.sh@10 -- # set +x 00:10:17.302 ************************************ 00:10:17.302 START TEST accel_cdev_decomp_full_mcore 00:10:17.302 ************************************ 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:17.302 16:27:14 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:17.302 [2024-07-24 16:27:14.101235] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:10:17.302 [2024-07-24 16:27:14.101341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1553975 ]
00:10:17.562 qat_pci_device_allocate(): Reached maximum number of QAT devices (message repeated once per device below)
00:10:17.562 EAL: Requested devices 0000:3d:01.0-0000:3d:01.7, 0000:3d:02.0-0000:3d:02.7, 0000:3f:01.0-0000:3f:01.7 and 0000:3f:02.0-0000:3f:02.7 cannot be used (32 identical message pairs condensed)
00:10:17.562 [2024-07-24 16:27:14.328541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:17.822 [2024-07-24 16:27:14.628720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:17.822 [2024-07-24 16:27:14.628794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:10:17.822 [2024-07-24 16:27:14.628859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:17.822 [2024-07-24 16:27:14.628868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:10:19.238 [2024-07-24 16:27:16.000544] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
00:10:19.238 [2024-07-24 16:27:16.003688] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d1a0 PMD being used: compress_qat
00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read
-r var val 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.238 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.239 [2024-07-24 16:27:16.013574] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000010100 
PMD being used: compress_qat 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.239 [2024-07-24 16:27:16.015667] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000017100 PMD being used: compress_qat 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.239 [2024-07-24 16:27:16.019730] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000020160 PMD being used: compress_qat 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:19.239 [2024-07-24 16:27:16.019963] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d280 PMD being used: compress_qat 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:19.239 16:27:16 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:21.143 16:27:17 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:21.143 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:21.144 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:21.144 16:27:17 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:21.144 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:21.144 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:21.144 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:21.144 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:21.144 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:21.144 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:21.144 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:21.144 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:21.144 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:21.144 16:27:17 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:21.144 00:10:21.144 real 0m3.941s 00:10:21.144 user 0m11.549s 00:10:21.144 sys 0m0.702s 00:10:21.144 16:27:17 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.144 16:27:17 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:21.144 ************************************ 00:10:21.144 END TEST accel_cdev_decomp_full_mcore 00:10:21.144 ************************************ 00:10:21.404 16:27:18 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:21.404 16:27:18 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:10:21.404 16:27:18 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.404 16:27:18 accel -- common/autotest_common.sh@10 -- # set +x 00:10:21.404 
************************************ 00:10:21.404 START TEST accel_cdev_decomp_mthread 00:10:21.404 ************************************ 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:21.404 16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:21.404 
16:27:18 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:21.404 [2024-07-24 16:27:18.105975] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:10:21.404 [2024-07-24 16:27:18.106074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1554741 ] 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 
EAL: Requested device 0000:3d:02.2 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 
0000:3f:02.0 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:21.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:21.404 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:21.664 [2024-07-24 16:27:18.331003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.923 [2024-07-24 16:27:18.617936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.305 [2024-07-24 16:27:20.028779] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:23.305 [2024-07-24 16:27:20.031816] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:10:23.305 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:23.305 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.305 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.305 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:23.306 16:27:20 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:23.306 [2024-07-24 16:27:20.042624] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread 
-- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 [2024-07-24 16:27:20.046848] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016300 PMD being used: compress_qat 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:23.306 16:27:20 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:25.213 16:27:21 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:25.213 00:10:25.213 real 0m3.729s 00:10:25.213 user 0m3.069s 00:10:25.213 sys 0m0.661s 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.213 16:27:21 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:25.213 ************************************ 00:10:25.213 END TEST accel_cdev_decomp_mthread 
00:10:25.213 ************************************ 00:10:25.213 16:27:21 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:25.213 16:27:21 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:25.213 16:27:21 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.213 16:27:21 accel -- common/autotest_common.sh@10 -- # set +x 00:10:25.213 ************************************ 00:10:25.213 START TEST accel_cdev_decomp_full_mthread 00:10:25.213 ************************************ 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:25.213 16:27:21 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:25.213 [2024-07-24 16:27:21.929343] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:10:25.213 [2024-07-24 16:27:21.929448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1555342 ] 00:10:25.213 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:25.214 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:10:25.214 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:25.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:25.214 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:25.472 [2024-07-24 16:27:22.156662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.730 [2024-07-24 16:27:22.438987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.104 [2024-07-24 16:27:23.821656] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:27.104 [2024-07-24 
16:27:23.824708] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.104 [2024-07-24 16:27:23.834207] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:27.104 16:27:23 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.104 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.105 
[2024-07-24 16:27:23.842700] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016300 PMD being used: compress_qat 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:27.105 16:27:23 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:29.008 16:27:25 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:29.008 00:10:29.008 real 0m3.749s 00:10:29.008 user 0m3.075s 00:10:29.008 sys 0m0.674s 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.008 16:27:25 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:29.008 ************************************ 00:10:29.008 END TEST accel_cdev_decomp_full_mthread 00:10:29.008 ************************************ 00:10:29.008 16:27:25 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV 00:10:29.008 16:27:25 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:29.008 16:27:25 accel -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:29.008 16:27:25 accel -- accel/accel.sh@137 -- # build_accel_config 00:10:29.008 16:27:25 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.008 16:27:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:29.008 16:27:25 accel -- common/autotest_common.sh@10 -- # set +x 00:10:29.008 16:27:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:29.008 16:27:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.008 16:27:25 accel -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.008 16:27:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:29.008 16:27:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:29.008 16:27:25 accel -- accel/accel.sh@41 -- # jq -r . 00:10:29.008 ************************************ 00:10:29.008 START TEST accel_dif_functional_tests 00:10:29.008 ************************************ 00:10:29.008 16:27:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:29.008 [2024-07-24 16:27:25.789163] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:10:29.009 [2024-07-24 16:27:25.789239] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556099 ] 00:10:29.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.267 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:29.267 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.267 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:29.268 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:10:29.268 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:29.268 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:29.268 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:29.268 [2024-07-24 16:27:25.984192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:29.527 [2024-07-24 16:27:26.253923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.527 [2024-07-24 16:27:26.253992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.527 [2024-07-24 16:27:26.253995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.095 00:10:30.095 00:10:30.096 CUnit - A unit testing framework for C - Version 
2.1-3 00:10:30.096 http://cunit.sourceforge.net/ 00:10:30.096 00:10:30.096 00:10:30.096 Suite: accel_dif 00:10:30.096 Test: verify: DIF generated, GUARD check ...passed 00:10:30.096 Test: verify: DIF generated, APPTAG check ...passed 00:10:30.096 Test: verify: DIF generated, REFTAG check ...passed 00:10:30.096 Test: verify: DIF not generated, GUARD check ...[2024-07-24 16:27:26.767027] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:30.096 passed 00:10:30.096 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 16:27:26.767119] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:30.096 passed 00:10:30.096 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 16:27:26.767178] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:30.096 passed 00:10:30.096 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:30.096 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 16:27:26.767279] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:30.096 passed 00:10:30.096 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:10:30.096 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:30.096 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:30.096 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 16:27:26.767497] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:30.096 passed 00:10:30.096 Test: verify copy: DIF generated, GUARD check ...passed 00:10:30.096 Test: verify copy: DIF generated, APPTAG check ...passed 00:10:30.096 Test: verify copy: DIF generated, REFTAG check ...passed 00:10:30.096 Test: verify copy: DIF not generated, GUARD check ...[2024-07-24 16:27:26.767732] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, 
Actual=7867 00:10:30.096 passed 00:10:30.096 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-24 16:27:26.767791] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:30.096 passed 00:10:30.096 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-24 16:27:26.767850] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:30.096 passed 00:10:30.096 Test: generate copy: DIF generated, GUARD check ...passed 00:10:30.096 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:30.096 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:30.096 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:30.096 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:30.096 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:30.096 Test: generate copy: iovecs-len validate ...[2024-07-24 16:27:26.768265] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:10:30.096 passed 00:10:30.096 Test: generate copy: buffer alignment validate ...passed 00:10:30.096 00:10:30.096 Run Summary: Type Total Ran Passed Failed Inactive 00:10:30.096 suites 1 1 n/a 0 0 00:10:30.096 tests 26 26 26 0 0 00:10:30.096 asserts 115 115 115 0 n/a 00:10:30.096 00:10:30.096 Elapsed time = 0.005 seconds 00:10:32.002 00:10:32.002 real 0m2.750s 00:10:32.002 user 0m5.623s 00:10:32.002 sys 0m0.327s 00:10:32.002 16:27:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.002 16:27:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:10:32.002 ************************************ 00:10:32.002 END TEST accel_dif_functional_tests 00:10:32.002 ************************************ 00:10:32.002 00:10:32.002 real 1m52.574s 00:10:32.002 user 2m11.238s 00:10:32.002 sys 0m15.734s 00:10:32.002 16:27:28 accel -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.002 16:27:28 accel -- common/autotest_common.sh@10 -- # set +x 00:10:32.002 ************************************ 00:10:32.002 END TEST accel 00:10:32.002 ************************************ 00:10:32.002 16:27:28 -- spdk/autotest.sh@186 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:32.002 16:27:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:32.002 16:27:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.002 16:27:28 -- common/autotest_common.sh@10 -- # set +x 00:10:32.002 ************************************ 00:10:32.002 START TEST accel_rpc 00:10:32.002 ************************************ 00:10:32.002 16:27:28 accel_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:32.002 * Looking for test storage... 
00:10:32.002 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:10:32.002 16:27:28 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:32.002 16:27:28 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1556685 00:10:32.002 16:27:28 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1556685 00:10:32.002 16:27:28 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:32.002 16:27:28 accel_rpc -- common/autotest_common.sh@831 -- # '[' -z 1556685 ']' 00:10:32.002 16:27:28 accel_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.002 16:27:28 accel_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.002 16:27:28 accel_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.002 16:27:28 accel_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.002 16:27:28 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:32.002 [2024-07-24 16:27:28.794133] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:10:32.002 [2024-07-24 16:27:28.794262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556685 ] 00:10:32.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.261 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:32.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:32.262 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:32.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:32.262 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:32.262 [2024-07-24 16:27:29.018025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.521 [2024-07-24 16:27:29.299382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.780 16:27:29 accel_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:32.780 16:27:29 accel_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:32.780 16:27:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:32.780 16:27:29 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:32.780 16:27:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:32.780 16:27:29 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:32.780 16:27:29 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:32.780 16:27:29 accel_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:32.780 16:27:29 accel_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.780 16:27:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.039 ************************************ 00:10:33.039 START TEST accel_assign_opcode 00:10:33.039 
************************************ 00:10:33.039 16:27:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # accel_assign_opcode_test_suite 00:10:33.039 16:27:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:33.039 16:27:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.039 16:27:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:33.039 [2024-07-24 16:27:29.669162] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:33.039 16:27:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.039 16:27:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:33.039 16:27:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.039 16:27:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:33.039 [2024-07-24 16:27:29.677124] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:33.039 16:27:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.039 16:27:29 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:33.039 16:27:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.039 16:27:29 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:34.417 16:27:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.417 16:27:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:34.417 16:27:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.417 16:27:30 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:34.417 16:27:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:34.417 16:27:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:10:34.417 16:27:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.417 software 00:10:34.417 00:10:34.417 real 0m1.241s 00:10:34.418 user 0m0.048s 00:10:34.418 sys 0m0.013s 00:10:34.418 16:27:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.418 16:27:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:34.418 ************************************ 00:10:34.418 END TEST accel_assign_opcode 00:10:34.418 ************************************ 00:10:34.418 16:27:30 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1556685 00:10:34.418 16:27:30 accel_rpc -- common/autotest_common.sh@950 -- # '[' -z 1556685 ']' 00:10:34.418 16:27:30 accel_rpc -- common/autotest_common.sh@954 -- # kill -0 1556685 00:10:34.418 16:27:30 accel_rpc -- common/autotest_common.sh@955 -- # uname 00:10:34.418 16:27:30 accel_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.418 16:27:30 accel_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1556685 00:10:34.418 16:27:31 accel_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:34.418 16:27:31 accel_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:34.418 16:27:31 accel_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1556685' 00:10:34.418 killing process with pid 1556685 00:10:34.418 16:27:31 accel_rpc -- common/autotest_common.sh@969 -- # kill 1556685 00:10:34.418 16:27:31 accel_rpc -- common/autotest_common.sh@974 -- # wait 1556685 00:10:37.739 00:10:37.739 real 0m5.624s 00:10:37.739 user 0m5.472s 00:10:37.739 sys 0m0.763s 00:10:37.739 16:27:34 accel_rpc -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:37.739 16:27:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.739 ************************************ 00:10:37.739 END TEST accel_rpc 00:10:37.739 ************************************ 00:10:37.739 16:27:34 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:10:37.739 16:27:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:37.739 16:27:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:37.739 16:27:34 -- common/autotest_common.sh@10 -- # set +x 00:10:37.739 ************************************ 00:10:37.739 START TEST app_cmdline 00:10:37.739 ************************************ 00:10:37.739 16:27:34 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:10:37.739 * Looking for test storage... 00:10:37.739 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:10:37.740 16:27:34 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:37.740 16:27:34 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1557603 00:10:37.740 16:27:34 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1557603 00:10:37.740 16:27:34 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:37.740 16:27:34 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1557603 ']' 00:10:37.740 16:27:34 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.740 16:27:34 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.740 16:27:34 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:37.740 16:27:34 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.740 16:27:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:37.740 [2024-07-24 16:27:34.520274] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:10:37.740 [2024-07-24 16:27:34.520401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1557603 ] 00:10:38.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:38.010 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:38.010 [same warning pair repeated for devices 0000:3d:01.1 through 0000:3f:02.7] [2024-07-24 16:27:34.739052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.269 [2024-07-24 16:27:35.027947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.649 16:27:36 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:39.649 16:27:36 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:10:39.649 16:27:36 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:39.649 { 00:10:39.649 "version": "SPDK v24.09-pre git sha1 8ee2672c4", 00:10:39.649 "fields": { 00:10:39.649 "major": 24, 00:10:39.649 "minor": 9, 00:10:39.649 "patch": 0, 00:10:39.649 "suffix": "-pre", 00:10:39.649 "commit": "8ee2672c4" 00:10:39.649 } 00:10:39.649 } 00:10:39.649 16:27:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:39.649
16:27:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:39.649 16:27:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:39.909 16:27:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:39.909 16:27:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:39.909 16:27:36 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.909 16:27:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:39.909 16:27:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:39.909 16:27:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:39.909 16:27:36 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.909 16:27:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:39.909 16:27:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:39.909 16:27:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:39.909 16:27:36 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:10:39.909 16:27:36 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:39.909 16:27:36 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:39.909 16:27:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:39.909 16:27:36 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:39.909 16:27:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:39.909 16:27:36 app_cmdline -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:39.909 16:27:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:39.909 16:27:36 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:39.909 16:27:36 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:10:39.909 16:27:36 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:40.168 request: 00:10:40.168 { 00:10:40.168 "method": "env_dpdk_get_mem_stats", 00:10:40.168 "req_id": 1 00:10:40.168 } 00:10:40.168 Got JSON-RPC error response 00:10:40.168 response: 00:10:40.168 { 00:10:40.168 "code": -32601, 00:10:40.168 "message": "Method not found" 00:10:40.168 } 00:10:40.168 16:27:36 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:10:40.168 16:27:36 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:40.168 16:27:36 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:40.168 16:27:36 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:40.168 16:27:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1557603 00:10:40.168 16:27:36 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1557603 ']' 00:10:40.168 16:27:36 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1557603 00:10:40.168 16:27:36 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:10:40.168 16:27:36 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.168 16:27:36 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1557603 00:10:40.168 16:27:36 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.168 16:27:36 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.168 16:27:36 app_cmdline -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1557603' 00:10:40.168 killing process with pid 1557603 00:10:40.168 16:27:36 app_cmdline -- common/autotest_common.sh@969 -- # kill 1557603 00:10:40.168 16:27:36 app_cmdline -- common/autotest_common.sh@974 -- # wait 1557603 00:10:43.458 00:10:43.458 real 0m5.826s 00:10:43.458 user 0m6.001s 00:10:43.458 sys 0m0.791s 00:10:43.458 16:27:40 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.458 16:27:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:43.458 ************************************ 00:10:43.458 END TEST app_cmdline 00:10:43.458 ************************************ 00:10:43.458 16:27:40 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:10:43.458 16:27:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:43.458 16:27:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.458 16:27:40 -- common/autotest_common.sh@10 -- # set +x 00:10:43.458 ************************************ 00:10:43.458 START TEST version 00:10:43.458 ************************************ 00:10:43.458 16:27:40 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:10:43.458 * Looking for test storage... 
00:10:43.458 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:10:43.458 16:27:40 version -- app/version.sh@17 -- # get_header_version major 00:10:43.458 16:27:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:10:43.459 16:27:40 version -- app/version.sh@14 -- # cut -f2 00:10:43.459 16:27:40 version -- app/version.sh@14 -- # tr -d '"' 00:10:43.459 16:27:40 version -- app/version.sh@17 -- # major=24 00:10:43.459 16:27:40 version -- app/version.sh@18 -- # get_header_version minor 00:10:43.459 16:27:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:10:43.459 16:27:40 version -- app/version.sh@14 -- # cut -f2 00:10:43.459 16:27:40 version -- app/version.sh@14 -- # tr -d '"' 00:10:43.459 16:27:40 version -- app/version.sh@18 -- # minor=9 00:10:43.459 16:27:40 version -- app/version.sh@19 -- # get_header_version patch 00:10:43.459 16:27:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:10:43.459 16:27:40 version -- app/version.sh@14 -- # cut -f2 00:10:43.459 16:27:40 version -- app/version.sh@14 -- # tr -d '"' 00:10:43.718 16:27:40 version -- app/version.sh@19 -- # patch=0 00:10:43.718 16:27:40 version -- app/version.sh@20 -- # get_header_version suffix 00:10:43.718 16:27:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:10:43.718 16:27:40 version -- app/version.sh@14 -- # cut -f2 00:10:43.718 16:27:40 version -- app/version.sh@14 -- # tr -d '"' 00:10:43.718 16:27:40 version -- app/version.sh@20 -- # suffix=-pre 00:10:43.718 16:27:40 version -- app/version.sh@22 -- # version=24.9 00:10:43.718 16:27:40 
version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:43.718 16:27:40 version -- app/version.sh@28 -- # version=24.9rc0 00:10:43.718 16:27:40 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:10:43.718 16:27:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:43.718 16:27:40 version -- app/version.sh@30 -- # py_version=24.9rc0 00:10:43.718 16:27:40 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:10:43.718 00:10:43.718 real 0m0.185s 00:10:43.718 user 0m0.090s 00:10:43.718 sys 0m0.138s 00:10:43.718 16:27:40 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.718 16:27:40 version -- common/autotest_common.sh@10 -- # set +x 00:10:43.718 ************************************ 00:10:43.718 END TEST version 00:10:43.718 ************************************ 00:10:43.718 16:27:40 -- spdk/autotest.sh@192 -- # '[' 1 -eq 1 ']' 00:10:43.718 16:27:40 -- spdk/autotest.sh@193 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:10:43.718 16:27:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:43.718 16:27:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.718 16:27:40 -- common/autotest_common.sh@10 -- # set +x 00:10:43.718 ************************************ 00:10:43.718 START TEST blockdev_general 00:10:43.718 ************************************ 00:10:43.718 16:27:40 blockdev_general -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:10:43.718 * Looking for test storage... 
00:10:43.718 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:10:43.718 16:27:40 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:10:43.718 16:27:40 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:10:43.978 16:27:40 blockdev_general -- 
bdev/blockdev.sh@685 -- # wait_for_rpc= 00:10:43.978 16:27:40 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:10:43.978 16:27:40 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:10:43.978 16:27:40 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:10:43.978 16:27:40 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:10:43.978 16:27:40 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1558744 00:10:43.978 16:27:40 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:43.978 16:27:40 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 1558744 00:10:43.978 16:27:40 blockdev_general -- common/autotest_common.sh@831 -- # '[' -z 1558744 ']' 00:10:43.978 16:27:40 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:10:43.978 16:27:40 blockdev_general -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.978 16:27:40 blockdev_general -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.979 16:27:40 blockdev_general -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.979 16:27:40 blockdev_general -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.979 16:27:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:43.979 [2024-07-24 16:27:40.699724] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:10:43.979 [2024-07-24 16:27:40.699842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558744 ] 00:10:43.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:43.979 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:43.979 [same warning pair repeated for devices 0000:3d:01.1 through 0000:3f:02.7] [2024-07-24 16:27:40.924939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.498 [2024-07-24 16:27:41.213308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.757 16:27:41 blockdev_general -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.757 16:27:41 blockdev_general -- common/autotest_common.sh@864 -- # return 0 00:10:44.757 16:27:41 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:10:44.757 16:27:41 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:10:44.757 16:27:41 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:10:44.757 16:27:41 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.757 16:27:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:46.134 [2024-07-24 16:27:42.743754] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 [2024-07-24 16:27:42.743822] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:46.134 00:10:46.134 [2024-07-24 16:27:42.751722] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently
unable to find bdev with name: Malloc2 00:10:46.134 [2024-07-24 16:27:42.751767] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:46.134 00:10:46.134 Malloc0 00:10:46.134 Malloc1 00:10:46.134 Malloc2 00:10:46.134 Malloc3 00:10:46.393 Malloc4 00:10:46.393 Malloc5 00:10:46.393 Malloc6 00:10:46.393 Malloc7 00:10:46.653 Malloc8 00:10:46.653 Malloc9 00:10:46.653 [2024-07-24 16:27:43.326610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:46.653 [2024-07-24 16:27:43.326676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.653 [2024-07-24 16:27:43.326706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045080 00:10:46.653 [2024-07-24 16:27:43.326726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.653 [2024-07-24 16:27:43.329479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.653 [2024-07-24 16:27:43.329512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:46.653 TestPT 00:10:46.653 16:27:43 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.653 16:27:43 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:10:46.653 5000+0 records in 00:10:46.653 5000+0 records out 00:10:46.653 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0111784 s, 916 MB/s 00:10:46.653 16:27:43 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:10:46.653 16:27:43 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.653 16:27:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:46.653 AIO0 00:10:46.653 16:27:43 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.653 
16:27:43 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:10:46.653 16:27:43 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.653 16:27:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:46.653 16:27:43 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.653 16:27:43 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:10:46.653 16:27:43 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:10:46.653 16:27:43 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.653 16:27:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:46.653 16:27:43 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.653 16:27:43 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:10:46.653 16:27:43 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.653 16:27:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:46.914 16:27:43 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.914 16:27:43 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:46.914 16:27:43 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.914 16:27:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:46.914 16:27:43 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.914 16:27:43 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:10:46.914 16:27:43 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:10:46.914 16:27:43 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.914 16:27:43 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:10:46.914 16:27:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x 
00:10:46.914 16:27:43 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.914 16:27:43 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:10:46.914 16:27:43 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:10:46.915 16:27:43 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "8065f432-2179-4004-b124-80c78e6ebb5c"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8065f432-2179-4004-b124-80c78e6ebb5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "8db9d1e3-1334-5c4d-8d55-dbb274b1f5c6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8db9d1e3-1334-5c4d-8d55-dbb274b1f5c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2603ddc3-eb4c-515c-bd9b-12b15b35dc31"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2603ddc3-eb4c-515c-bd9b-12b15b35dc31",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "0773fd86-7eae-58d0-b046-280cf5787b29"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0773fd86-7eae-58d0-b046-280cf5787b29",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "d55510ff-ef96-5be7-914b-361534477547"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d55510ff-ef96-5be7-914b-361534477547",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "6b16a204-989d-58e6-97d1-2150a62c42af"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b16a204-989d-58e6-97d1-2150a62c42af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' 
' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "38db5ee5-bf8e-586d-8281-1a9eccc1f11a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "38db5ee5-bf8e-586d-8281-1a9eccc1f11a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "a13a9fe7-04a6-51f0-9a7c-963ba5612a05"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a13a9fe7-04a6-51f0-9a7c-963ba5612a05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "121c8ae5-a756-5a7a-a5d9-8f275c708cc2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "121c8ae5-a756-5a7a-a5d9-8f275c708cc2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "07c66f84-6e10-51ff-a593-098911dfb2f7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "07c66f84-6e10-51ff-a593-098911dfb2f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "34d79038-6b89-5da5-a67c-095fddef7bef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "34d79038-6b89-5da5-a67c-095fddef7bef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "f8e3e86d-60ad-5dd2-9f56-54c78ae66ccc"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f8e3e86d-60ad-5dd2-9f56-54c78ae66ccc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": 
true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d9d16e54-0aed-4930-bdb4-7e2861ebfe67"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d9d16e54-0aed-4930-bdb4-7e2861ebfe67",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d9d16e54-0aed-4930-bdb4-7e2861ebfe67",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "2a002aaa-bb1a-4abf-b680-6fd14eb5fb44",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": 
"a9d71a2a-6659-4da1-a825-1751cfdd827d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "3a5ce5db-8fd2-44c8-9b97-62ee77ce5cb3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3a5ce5db-8fd2-44c8-9b97-62ee77ce5cb3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "3a5ce5db-8fd2-44c8-9b97-62ee77ce5cb3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "d50051ac-5410-45d7-b24d-293d9fcfa556",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a4397356-45e0-4209-b21d-faf19e8d7f39",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' 
"c6e56614-5c68-48db-824e-b627920f64c1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c6e56614-5c68-48db-824e-b627920f64c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c6e56614-5c68-48db-824e-b627920f64c1",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "ffc64c05-2cb6-4f32-993d-cb82a31f1030",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "766362e4-544a-4760-81b0-42a2f2b52549",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "84d0b0a2-94a4-4e5b-b59e-091b598d8ae0"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "84d0b0a2-94a4-4e5b-b59e-091b598d8ae0",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:10:46.915 16:27:43 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:10:46.915 16:27:43 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:10:46.915 16:27:43 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:10:46.915 16:27:43 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 1558744 00:10:46.915 16:27:43 blockdev_general -- common/autotest_common.sh@950 -- # '[' -z 1558744 ']' 00:10:46.915 16:27:43 blockdev_general -- common/autotest_common.sh@954 -- # kill -0 1558744 00:10:46.915 16:27:43 blockdev_general -- common/autotest_common.sh@955 -- # uname 00:10:46.915 16:27:43 blockdev_general -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.915 16:27:43 blockdev_general -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1558744 00:10:47.174 16:27:43 blockdev_general -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:47.174 16:27:43 blockdev_general -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:47.174 16:27:43 blockdev_general -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 1558744' 00:10:47.174 killing process with pid 1558744 00:10:47.174 16:27:43 blockdev_general -- common/autotest_common.sh@969 -- # kill 1558744 00:10:47.174 16:27:43 blockdev_general -- common/autotest_common.sh@974 -- # wait 1558744 00:10:52.448 16:27:48 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:52.449 16:27:48 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:10:52.449 16:27:48 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:52.449 16:27:48 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.449 16:27:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:52.449 ************************************ 00:10:52.449 START TEST bdev_hello_world 00:10:52.449 ************************************ 00:10:52.449 16:27:48 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:10:52.449 [2024-07-24 16:27:48.517952] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:10:52.449 [2024-07-24 16:27:48.518066] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1560091 ] 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:52.449 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:52.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:52.449 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:52.449 [2024-07-24 16:27:48.739281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.449 [2024-07-24 16:27:49.020261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.016 [2024-07-24 16:27:49.596325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:53.016 [2024-07-24 16:27:49.596391] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:53.016 [2024-07-24 16:27:49.596414] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:53.016 [2024-07-24 16:27:49.604301] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:53.016 [2024-07-24 16:27:49.604343] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:53.016 [2024-07-24 16:27:49.612310] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:53.016 [2024-07-24 16:27:49.612349] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:53.016 [2024-07-24 16:27:49.856832] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:53.016 [2024-07-24 16:27:49.856897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.016 [2024-07-24 16:27:49.856919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:10:53.016 [2024-07-24 16:27:49.856934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.016 [2024-07-24 16:27:49.859641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.016 [2024-07-24 16:27:49.859676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:53.585 [2024-07-24 16:27:50.293217] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:53.585 [2024-07-24 16:27:50.293306] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:10:53.585 [2024-07-24 16:27:50.293382] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:53.585 [2024-07-24 16:27:50.293487] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:53.585 [2024-07-24 16:27:50.293596] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:53.585 [2024-07-24 16:27:50.293638] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:53.585 [2024-07-24 16:27:50.293736] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:10:53.585 00:10:53.585 [2024-07-24 16:27:50.293794] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:56.874 00:10:56.874 real 0m4.901s 00:10:56.874 user 0m4.361s 00:10:56.874 sys 0m0.469s 00:10:56.874 16:27:53 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.874 16:27:53 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:56.874 ************************************ 00:10:56.874 END TEST bdev_hello_world 00:10:56.874 ************************************ 00:10:56.874 16:27:53 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:10:56.874 16:27:53 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:56.874 16:27:53 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.874 16:27:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:56.874 ************************************ 00:10:56.874 START TEST bdev_bounds 00:10:56.874 ************************************ 00:10:56.874 16:27:53 blockdev_general.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:10:56.874 16:27:53 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=1561067 00:10:56.874 16:27:53 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:56.874 16:27:53 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 1561067' 00:10:56.874 Process bdevio pid: 1561067 00:10:56.874 16:27:53 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 1561067 00:10:56.874 16:27:53 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 1561067 ']' 00:10:56.874 16:27:53 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.874 16:27:53 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # local 
max_retries=100
00:10:56.874 16:27:53 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:56.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:56.874 16:27:53 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:56.874 16:27:53 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json ''
00:10:56.874 16:27:53 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:10:56.874 [2024-07-24 16:27:53.608663] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:10:56.874 [2024-07-24 16:27:53.608919] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1561067 ]
00:10:57.133 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:57.133 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:57.133 [the same qat_pci_device_allocate()/EAL message pair repeats at 00:10:57.133 for each remaining virtual function: 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7]
00:10:57.133 [2024-07-24 16:27:53.980756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:57.702 [2024-07-24 16:27:54.270753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:57.702 [2024-07-24 16:27:54.270834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:57.702 [2024-07-24 16:27:54.270835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:10:58.323 [2024-07-24 16:27:54.828505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:58.323 [2024-07-24 16:27:54.828580] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:10:58.323 [2024-07-24 16:27:54.828599] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:10:58.323 [2024-07-24 16:27:54.836513] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:58.323 [2024-07-24 16:27:54.836556] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:58.323 [2024-07-24 16:27:54.844521] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:58.323 [2024-07-24 16:27:54.844558] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:58.323 [2024-07-24 16:27:55.078503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:58.323 [2024-07-24 16:27:55.078565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:58.323 [2024-07-24 16:27:55.078587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80
00:10:58.323 [2024-07-24 16:27:55.078602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:58.323 [2024-07-24 16:27:55.081435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:58.323 [2024-07-24 16:27:55.081469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:10:59.261 16:27:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:59.261 16:27:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # return 0
00:10:59.261 16:27:56 blockdev_general.bdev_bounds --
bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests
00:10:59.521 I/O targets:
00:10:59.521 Malloc0: 65536 blocks of 512 bytes (32 MiB)
00:10:59.521 Malloc1p0: 32768 blocks of 512 bytes (16 MiB)
00:10:59.521 Malloc1p1: 32768 blocks of 512 bytes (16 MiB)
00:10:59.521 Malloc2p0: 8192 blocks of 512 bytes (4 MiB)
00:10:59.521 Malloc2p1: 8192 blocks of 512 bytes (4 MiB)
00:10:59.521 Malloc2p2: 8192 blocks of 512 bytes (4 MiB)
00:10:59.521 Malloc2p3: 8192 blocks of 512 bytes (4 MiB)
00:10:59.521 Malloc2p4: 8192 blocks of 512 bytes (4 MiB)
00:10:59.521 Malloc2p5: 8192 blocks of 512 bytes (4 MiB)
00:10:59.521 Malloc2p6: 8192 blocks of 512 bytes (4 MiB)
00:10:59.521 Malloc2p7: 8192 blocks of 512 bytes (4 MiB)
00:10:59.521 TestPT: 65536 blocks of 512 bytes (32 MiB)
00:10:59.521 raid0: 131072 blocks of 512 bytes (64 MiB)
00:10:59.521 concat0: 131072 blocks of 512 bytes (64 MiB)
00:10:59.521 raid1: 65536 blocks of 512 bytes (32 MiB)
00:10:59.521 AIO0: 5000 blocks of 2048 bytes (10 MiB)
00:10:59.521
00:10:59.521
00:10:59.521 CUnit - A unit testing framework for C - Version 2.1-3
00:10:59.521 http://cunit.sourceforge.net/
00:10:59.521
00:10:59.521
00:10:59.521 Suite: bdevio tests on: AIO0
00:10:59.521 Test: blockdev write read block ...passed
00:10:59.521 Test: blockdev write zeroes read block ...passed
00:10:59.521 Test: blockdev write zeroes read no split ...passed
00:10:59.521 Test: blockdev write zeroes read split ...passed
00:10:59.521 Test: blockdev write zeroes read split partial ...passed
00:10:59.521 Test: blockdev reset ...passed
00:10:59.521 Test: blockdev write read 8 blocks ...passed
00:10:59.521 Test: blockdev write read size > 128k ...passed
00:10:59.521 Test: blockdev write read invalid size ...passed
00:10:59.521 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:59.521 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:59.521 Test: blockdev write read max offset ...passed
00:10:59.521 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:59.521 Test: blockdev writev readv 8 blocks ...passed
00:10:59.521 Test: blockdev writev readv 30 x 1block ...passed
00:10:59.521 Test: blockdev writev readv block ...passed
00:10:59.521 Test: blockdev writev readv size > 128k ...passed
00:10:59.521 Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:59.521 Test: blockdev comparev and writev ...passed
00:10:59.521 Test: blockdev nvme passthru rw ...passed
00:10:59.521 Test: blockdev nvme passthru vendor specific ...passed
00:10:59.521 Test: blockdev nvme admin passthru ...passed
00:10:59.521 Test: blockdev copy ...passed
[the identical 23-test suite then runs and passes (00:10:59.521 through 00:11:00.820) for each remaining target, in order: raid1, concat0, raid0, TestPT, Malloc2p7, Malloc2p6, Malloc2p5, Malloc2p4, Malloc2p3, Malloc2p2, Malloc2p1, Malloc2p0, Malloc1p1, Malloc1p0, Malloc0]
00:11:00.820
00:11:00.820 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:11:00.820               suites     16     16    n/a      0        0
00:11:00.820                tests    368    368    368      0        0
00:11:00.820              asserts   2224   2224   2224      0      n/a
00:11:00.820
00:11:00.820 Elapsed time =    3.926 seconds
00:11:00.820 0
00:11:00.820 16:27:57 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 1561067
00:11:00.820 16:27:57 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 1561067 ']'
00:11:00.820 16:27:57 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 1561067
00:11:00.820 16:27:57 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:11:00.820 16:27:57 blockdev_general.bdev_bounds --
common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:00.820 16:27:57 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1561067
00:11:00.820 16:27:57 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:00.820 16:27:57 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:00.820 16:27:57 blockdev_general.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1561067'
00:11:00.820 killing process with pid 1561067
00:11:00.820 16:27:57 blockdev_general.bdev_bounds -- common/autotest_common.sh@969 -- # kill 1561067
00:11:00.820 16:27:57 blockdev_general.bdev_bounds -- common/autotest_common.sh@974 -- # wait 1561067
00:11:04.110 16:28:00 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:11:04.110
00:11:04.110 real 0m6.834s
00:11:04.110 user 0m17.380s
00:11:04.110 sys 0m0.868s
00:11:04.110 16:28:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:04.110 16:28:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:11:04.110 ************************************
00:11:04.110 END TEST bdev_bounds
00:11:04.110 ************************************
00:11:04.110 16:28:00 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:11:04.110 16:28:00 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:11:04.110 16:28:00 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:04.110 16:28:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:04.110 ************************************
00:11:04.110 START TEST bdev_nbd
00:11:04.110 ************************************
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=1562222
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json ''
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 1562222 /var/tmp/spdk-nbd.sock
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 1562222 ']'
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
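The waitforlisten step traced here starts bdev_svc in the background and then polls, up to max_retries times, for the RPC UNIX socket to appear before any rpc.py call is issued. A minimal stand-alone sketch of that polling loop follows; the /tmp path and the delayed file-creation stand-in for the daemon are assumptions for illustration (the real helper waits on an actual UNIX socket and probes it with an RPC).

```shell
# Sketch of the waitforlisten polling pattern from the trace above.
# A delayed file creation stands in for bdev_svc opening its socket;
# path and stand-in are assumptions, not the real autotest helper.
rpc_addr=/tmp/demo-spdk-nbd.sock      # stand-in for /var/tmp/spdk-nbd.sock
rm -f "$rpc_addr"
( sleep 0.3; : > "$rpc_addr" ) &      # fake daemon: "listens" after a delay

max_retries=100
until [ -e "$rpc_addr" ]; do
    max_retries=$((max_retries - 1))
    if [ "$max_retries" -le 0 ]; then
        echo "timed out waiting for $rpc_addr"
        exit 1
    fi
    sleep 0.1
done
echo "ready: $rpc_addr"
wait                                   # reap the background stand-in
```

Only after the loop succeeds does the test proceed to issue nbd_start_disk RPCs against the socket.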
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:04.110 16:28:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:11:04.110 [2024-07-24 16:28:00.434044] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:11:04.110 [2024-07-24 16:28:00.434179] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:01.0 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:01.1 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:01.2 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:01.3 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:01.4 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:01.5 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:01.6 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:01.7 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:02.0 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:02.1 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:02.2 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:02.3 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:02.4 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:02.5 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:02.6 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3d:02.7 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3f:01.0 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3f:01.1 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3f:01.2 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3f:01.3 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3f:01.4 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3f:01.5 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3f:01.6 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3f:01.7 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.110 EAL: Requested device 0000:3f:02.0 cannot be used
00:11:04.110 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.111 EAL: Requested device 0000:3f:02.1 cannot be used
00:11:04.111 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.111 EAL: Requested device 0000:3f:02.2 cannot be used
00:11:04.111 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.111 EAL: Requested device 0000:3f:02.3 cannot be used
00:11:04.111 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.111 EAL: Requested device 0000:3f:02.4 cannot be used
00:11:04.111 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.111 EAL: Requested device 0000:3f:02.5 cannot be used
00:11:04.111 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.111 EAL: Requested device 0000:3f:02.6 cannot be used
00:11:04.111 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:04.111 EAL: Requested device 0000:3f:02.7 cannot be used
00:11:04.111 [2024-07-24 16:28:00.663486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:04.110 [2024-07-24 16:28:00.943193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:05.047 [2024-07-24 16:28:01.547716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:05.047 [2024-07-24 16:28:01.547779] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:05.047 [2024-07-24 16:28:01.547800] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:05.047 [2024-07-24 16:28:01.555664] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:05.047 [2024-07-24 16:28:01.555707] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:05.047
[2024-07-24 16:28:01.563668] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:05.047 [2024-07-24 16:28:01.563707] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:05.047 [2024-07-24 16:28:01.832333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:05.047 [2024-07-24 16:28:01.832411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:05.047 [2024-07-24 16:28:01.832434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980
00:11:05.047 [2024-07-24 16:28:01.832451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:05.047 [2024-07-24 16:28:01.835177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:05.047 [2024-07-24 16:28:01.835212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:06.423 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:06.423 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # return 0
00:11:06.423 16:28:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:11:06.423 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:06.423 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:06.423 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:06.424 1+0 records in
00:11:06.424 1+0 records out
00:11:06.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245573 s, 16.7 MB/s
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:11:06.424 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:06.682 1+0 records in
00:11:06.682 1+0 records out
00:11:06.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299579 s, 13.7 MB/s
00:11:06.682 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:06.941 1+0 records in
00:11:06.941 1+0 records out
00:11:06.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292455 s, 14.0 MB/s
00:11:06.941 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:07.200 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:11:07.200 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:07.200 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:07.200 16:28:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:11:07.200 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:07.200 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:07.200 16:28:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0
00:11:07.200 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:11:07.200 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:07.459 1+0 records in
00:11:07.459 1+0 records out
00:11:07.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442195 s, 9.3 MB/s
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:07.459 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:07.718 1+0 records in
00:11:07.718 1+0 records out
00:11:07.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324272 s, 12.6 MB/s
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:07.718 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:07.977 1+0 records in
00:11:07.977 1+0 records out
00:11:07.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389586 s, 10.5 MB/s
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:07.977 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:08.236 1+0 records in
00:11:08.236 1+0 records out
00:11:08.236 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050856 s, 8.1 MB/s
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:08.236 16:28:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4
00:11:08.495 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:08.496 1+0 records in
00:11:08.496 1+0 records out
00:11:08.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466559 s, 8.8 MB/s
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:08.496 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5
00:11:08.755 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8
00:11:08.755 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8
00:11:08.755 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8
00:11:08.755 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
1+0 records in
00:11:08.756 1+0 records out
00:11:08.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430085 s, 9.5 MB/s
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:08.756 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:11:09.015 1+0 records in
00:11:09.015 1+0 records out
00:11:09.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486498 s, 8.4 MB/s
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:11:09.015 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7
00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10
00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10
00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10
00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10
00:11:09.274 16:28:05 blockdev_general.bdev_nbd --
common/autotest_common.sh@869 -- # local i 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:09.274 1+0 records in 00:11:09.274 1+0 records out 00:11:09.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467305 s, 8.8 MB/s 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:09.274 16:28:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:09.534 16:28:06 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:09.534 1+0 records in 00:11:09.534 1+0 records out 00:11:09.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595255 s, 6.9 MB/s 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:09.534 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:09.794 1+0 records in 00:11:09.794 1+0 records out 00:11:09.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622963 s, 6.6 MB/s 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:09.794 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.054 1+0 records in 00:11:10.054 1+0 records out 00:11:10.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529656 s, 7.7 MB/s 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:10.054 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:10.314 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:10.314 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:10.314 16:28:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:10.314 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:11:10.314 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:10.314 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:10.314 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i 
<= 20 )) 00:11:10.314 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:11:10.314 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:10.314 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:10.314 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:10.314 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.314 1+0 records in 00:11:10.314 1+0 records out 00:11:10.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000749244 s, 5.5 MB/s 00:11:10.314 16:28:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:10.314 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:10.314 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:10.314 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:10.314 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:10.314 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:10.314 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:10.314 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.574 1+0 records in 00:11:10.574 1+0 records out 00:11:10.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072475 s, 5.7 MB/s 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:10.574 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:10.574 
16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:10.835 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd0", 00:11:10.835 "bdev_name": "Malloc0" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd1", 00:11:10.835 "bdev_name": "Malloc1p0" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd2", 00:11:10.835 "bdev_name": "Malloc1p1" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd3", 00:11:10.835 "bdev_name": "Malloc2p0" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd4", 00:11:10.835 "bdev_name": "Malloc2p1" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd5", 00:11:10.835 "bdev_name": "Malloc2p2" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd6", 00:11:10.835 "bdev_name": "Malloc2p3" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd7", 00:11:10.835 "bdev_name": "Malloc2p4" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd8", 00:11:10.835 "bdev_name": "Malloc2p5" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd9", 00:11:10.835 "bdev_name": "Malloc2p6" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd10", 00:11:10.835 "bdev_name": "Malloc2p7" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd11", 00:11:10.835 "bdev_name": "TestPT" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd12", 00:11:10.835 "bdev_name": "raid0" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd13", 00:11:10.835 "bdev_name": "concat0" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd14", 00:11:10.835 "bdev_name": "raid1" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd15", 00:11:10.835 "bdev_name": "AIO0" 00:11:10.835 } 
00:11:10.835 ]' 00:11:10.835 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:10.835 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd0", 00:11:10.835 "bdev_name": "Malloc0" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd1", 00:11:10.835 "bdev_name": "Malloc1p0" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd2", 00:11:10.835 "bdev_name": "Malloc1p1" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd3", 00:11:10.835 "bdev_name": "Malloc2p0" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd4", 00:11:10.835 "bdev_name": "Malloc2p1" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd5", 00:11:10.835 "bdev_name": "Malloc2p2" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd6", 00:11:10.835 "bdev_name": "Malloc2p3" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd7", 00:11:10.835 "bdev_name": "Malloc2p4" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd8", 00:11:10.835 "bdev_name": "Malloc2p5" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd9", 00:11:10.835 "bdev_name": "Malloc2p6" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd10", 00:11:10.835 "bdev_name": "Malloc2p7" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd11", 00:11:10.835 "bdev_name": "TestPT" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd12", 00:11:10.835 "bdev_name": "raid0" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd13", 00:11:10.835 "bdev_name": "concat0" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd14", 00:11:10.835 "bdev_name": "raid1" 00:11:10.835 }, 00:11:10.835 { 00:11:10.835 "nbd_device": "/dev/nbd15", 00:11:10.835 "bdev_name": "AIO0" 00:11:10.835 } 00:11:10.835 ]' 
00:11:10.835 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:10.835 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:11:10.835 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:10.835 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:11:10.835 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:10.835 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:10.835 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:10.835 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:11.095 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:11.095 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:11.095 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:11.095 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.095 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.095 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:11.095 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:11.095 16:28:07 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:11.095 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.095 16:28:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:11.355 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:11.355 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:11.355 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:11.355 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.355 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.355 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:11.355 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:11.355 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.355 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.355 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:11.615 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:11.615 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:11.615 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:11.615 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.615 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.615 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:11.615 16:28:08 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:11.615 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.615 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.615 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:11.875 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:11.875 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:11.875 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:11.875 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:11.875 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:11.875 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:11.875 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:11.875 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:11.875 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:11.875 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:12.135 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:12.135 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:12.135 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:12.135 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.135 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.135 16:28:08 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:12.135 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:12.135 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.135 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.135 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:12.395 16:28:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.395 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:12.654 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:12.654 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:12.654 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:12.655 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.655 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.655 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:12.655 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:12.655 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.655 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.655 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:12.914 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:12.914 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:12.914 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd8 00:11:12.914 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:12.914 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:12.914 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:12.914 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:12.914 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:12.914 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:12.914 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:13.174 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:13.174 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:13.174 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:13.174 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.174 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.174 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:13.174 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:13.174 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.174 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.174 16:28:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:13.433 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:13.433 16:28:10 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:13.433 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:13.433 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.433 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.433 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:13.433 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:13.433 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.433 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.433 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:13.692 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:13.692 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:13.692 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:13.692 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.692 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.692 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:13.693 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:13.693 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.693 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.693 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:13.952 16:28:10 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:13.953 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:13.953 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:13.953 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:13.953 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:13.953 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:13.953 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:13.953 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:13.953 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:13.953 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:14.253 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:14.253 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:14.253 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:14.253 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:14.253 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:14.253 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:14.253 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:14.253 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:14.253 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:14.253 16:28:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:14.513 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:14.513 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:14.513 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:14.513 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:14.513 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:14.513 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:14.513 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:14.513 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:14.513 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:14.513 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:14.513 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:14.772 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:14.772 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:14.772 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:14.772 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:14.772 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:14.772 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:14.772 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:14.772 16:28:11 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:14.772 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:14.772 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:14.772 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:14.772 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:14.772 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:15.032 16:28:11 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:15.032 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:15.033 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:15.033 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:15.293 /dev/nbd0 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.293 1+0 records in 00:11:15.293 1+0 records out 00:11:15.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269764 s, 15.2 MB/s 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:15.293 16:28:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:11:15.554 /dev/nbd1 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.554 1+0 records in 00:11:15.554 1+0 records out 00:11:15.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332998 s, 12.3 MB/s 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:15.554 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:11:15.814 /dev/nbd10 00:11:15.814 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:15.814 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:15.814 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:11:15.814 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:15.814 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:15.814 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:15.814 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 
/proc/partitions 00:11:15.814 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:15.814 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:15.814 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:15.814 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.814 1+0 records in 00:11:15.814 1+0 records out 00:11:15.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339051 s, 12.1 MB/s 00:11:15.815 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:15.815 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:15.815 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:15.815 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:15.815 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:15.815 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:15.815 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:15.815 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:11:16.074 /dev/nbd11 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:11:16.074 16:28:12 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.074 1+0 records in 00:11:16.074 1+0 records out 00:11:16.074 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025405 s, 16.1 MB/s 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:16.074 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Malloc2p1 /dev/nbd12 00:11:16.334 /dev/nbd12 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.334 1+0 records in 00:11:16.334 1+0 records out 00:11:16.334 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401111 s, 10.2 MB/s 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 
00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:16.334 16:28:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:11:16.593 /dev/nbd13 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.593 1+0 records in 00:11:16.593 1+0 records out 00:11:16.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357456 s, 11.5 MB/s 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 
00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:16.593 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:11:16.852 /dev/nbd14 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.852 1+0 records in 
00:11:16.852 1+0 records out 00:11:16.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475344 s, 8.6 MB/s 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:16.852 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:11:17.112 /dev/nbd15 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.112 1+0 records in 00:11:17.112 1+0 records out 00:11:17.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439878 s, 9.3 MB/s 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:17.112 16:28:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:11:17.371 /dev/nbd2 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 
00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.371 1+0 records in 00:11:17.371 1+0 records out 00:11:17.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456564 s, 9.0 MB/s 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:17.371 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:11:17.631 /dev/nbd3 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:11:17.631 16:28:14 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.631 1+0 records in 00:11:17.631 1+0 records out 00:11:17.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612229 s, 6.7 MB/s 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i 
< 16 )) 00:11:17.631 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:11:17.891 /dev/nbd4 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.891 1+0 records in 00:11:17.891 1+0 records out 00:11:17.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475874 s, 8.6 MB/s 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.891 
16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:17.891 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:11:18.150 /dev/nbd5 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.150 1+0 records in 00:11:18.150 1+0 records out 00:11:18.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438967 s, 9.3 MB/s 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 
-- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:18.150 16:28:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:11:18.410 /dev/nbd6 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.410 1+0 records in 00:11:18.410 1+0 records out 00:11:18.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640368 s, 6.4 MB/s 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:18.410 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:11:18.669 /dev/nbd7 00:11:18.669 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:11:18.669 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:11:18.669 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:11:18.669 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:18.669 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.669 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.669 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 
/proc/partitions 00:11:18.669 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:18.669 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.669 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.670 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.670 1+0 records in 00:11:18.670 1+0 records out 00:11:18.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100356 s, 4.1 MB/s 00:11:18.670 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.670 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:18.670 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.670 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.670 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:18.670 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.670 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:18.670 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:11:18.929 /dev/nbd8 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:11:18.929 16:28:15 blockdev_general.bdev_nbd 
-- common/autotest_common.sh@869 -- # local i 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.929 1+0 records in 00:11:18.929 1+0 records out 00:11:18.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469962 s, 8.7 MB/s 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:18.929 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:11:19.188 
/dev/nbd9 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.188 1+0 records in 00:11:19.188 1+0 records out 00:11:19.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000664767 s, 6.2 MB/s 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:19.188 16:28:15 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.188 16:28:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd0", 00:11:19.448 "bdev_name": "Malloc0" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd1", 00:11:19.448 "bdev_name": "Malloc1p0" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd10", 00:11:19.448 "bdev_name": "Malloc1p1" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd11", 00:11:19.448 "bdev_name": "Malloc2p0" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd12", 00:11:19.448 "bdev_name": "Malloc2p1" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd13", 00:11:19.448 "bdev_name": "Malloc2p2" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd14", 00:11:19.448 "bdev_name": "Malloc2p3" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd15", 00:11:19.448 "bdev_name": "Malloc2p4" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd2", 00:11:19.448 "bdev_name": "Malloc2p5" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd3", 00:11:19.448 "bdev_name": "Malloc2p6" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd4", 00:11:19.448 "bdev_name": "Malloc2p7" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd5", 00:11:19.448 "bdev_name": "TestPT" 00:11:19.448 }, 00:11:19.448 { 
00:11:19.448 "nbd_device": "/dev/nbd6", 00:11:19.448 "bdev_name": "raid0" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd7", 00:11:19.448 "bdev_name": "concat0" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd8", 00:11:19.448 "bdev_name": "raid1" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd9", 00:11:19.448 "bdev_name": "AIO0" 00:11:19.448 } 00:11:19.448 ]' 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd0", 00:11:19.448 "bdev_name": "Malloc0" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd1", 00:11:19.448 "bdev_name": "Malloc1p0" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd10", 00:11:19.448 "bdev_name": "Malloc1p1" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd11", 00:11:19.448 "bdev_name": "Malloc2p0" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd12", 00:11:19.448 "bdev_name": "Malloc2p1" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd13", 00:11:19.448 "bdev_name": "Malloc2p2" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd14", 00:11:19.448 "bdev_name": "Malloc2p3" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd15", 00:11:19.448 "bdev_name": "Malloc2p4" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd2", 00:11:19.448 "bdev_name": "Malloc2p5" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd3", 00:11:19.448 "bdev_name": "Malloc2p6" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd4", 00:11:19.448 "bdev_name": "Malloc2p7" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd5", 00:11:19.448 "bdev_name": "TestPT" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd6", 00:11:19.448 "bdev_name": "raid0" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd7", 00:11:19.448 
"bdev_name": "concat0" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd8", 00:11:19.448 "bdev_name": "raid1" 00:11:19.448 }, 00:11:19.448 { 00:11:19.448 "nbd_device": "/dev/nbd9", 00:11:19.448 "bdev_name": "AIO0" 00:11:19.448 } 00:11:19.448 ]' 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:19.448 /dev/nbd1 00:11:19.448 /dev/nbd10 00:11:19.448 /dev/nbd11 00:11:19.448 /dev/nbd12 00:11:19.448 /dev/nbd13 00:11:19.448 /dev/nbd14 00:11:19.448 /dev/nbd15 00:11:19.448 /dev/nbd2 00:11:19.448 /dev/nbd3 00:11:19.448 /dev/nbd4 00:11:19.448 /dev/nbd5 00:11:19.448 /dev/nbd6 00:11:19.448 /dev/nbd7 00:11:19.448 /dev/nbd8 00:11:19.448 /dev/nbd9' 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:19.448 /dev/nbd1 00:11:19.448 /dev/nbd10 00:11:19.448 /dev/nbd11 00:11:19.448 /dev/nbd12 00:11:19.448 /dev/nbd13 00:11:19.448 /dev/nbd14 00:11:19.448 /dev/nbd15 00:11:19.448 /dev/nbd2 00:11:19.448 /dev/nbd3 00:11:19.448 /dev/nbd4 00:11:19.448 /dev/nbd5 00:11:19.448 /dev/nbd6 00:11:19.448 /dev/nbd7 00:11:19.448 /dev/nbd8 00:11:19.448 /dev/nbd9' 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 
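The `nbd_dd_data_verify … write` call traced above (and the matching `verify` pass later in the log) follows a simple pattern: generate a 1 MiB random reference file, `dd` it onto each nbd device, then `cmp` each device back against the reference. A minimal sketch of that flow, using plain temp files as stand-ins for the `/dev/nbdX` block devices (an assumption for illustration; the real helper also uses `oflag=direct`/`iflag=direct`, which is omitted here because direct I/O is not reliable on regular files):

```shell
#!/usr/bin/env bash
# Sketch of the write/verify pattern from nbd_dd_data_verify.
# Paths and the "targets" array are placeholders, not the ones in the log.
set -euo pipefail

workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT

# Stand-ins for the nbd device list ('/dev/nbd0 /dev/nbd1 ...' in the log).
targets=("$workdir/nbd0" "$workdir/nbd1" "$workdir/nbd2")

# 1) Create a 1 MiB random reference file (256 x 4096 bytes, as in the log).
dd if=/dev/urandom of="$workdir/nbdrandtest" bs=4096 count=256 status=none

# 2) "write" pass: copy the reference data onto every target.
for t in "${targets[@]}"; do
    dd if="$workdir/nbdrandtest" of="$t" bs=4096 count=256 status=none
done

# 3) "verify" pass: byte-compare the first 1 MiB of each target
#    against the reference, exactly as the 'cmp -b -n 1M' lines do.
for t in "${targets[@]}"; do
    cmp -b -n 1M "$workdir/nbdrandtest" "$t"
done
```

On real nbd devices the verify pass only compares the first 1 MiB (`-n 1M`) because the devices are larger than the reference file; `cmp` exits non-zero on the first differing byte, which fails the test under `set -e`.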
00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:19.448 256+0 records in 00:11:19.448 256+0 records out 00:11:19.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011407 s, 91.9 MB/s 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.448 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:19.707 256+0 records in 00:11:19.707 256+0 records out 00:11:19.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175039 s, 6.0 MB/s 00:11:19.707 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.707 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:19.707 256+0 records in 00:11:19.707 256+0 records out 00:11:19.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149673 s, 7.0 MB/s 00:11:19.707 16:28:16 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.707 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:19.966 256+0 records in 00:11:19.966 256+0 records out 00:11:19.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148343 s, 7.1 MB/s 00:11:19.966 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:19.966 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:20.225 256+0 records in 00:11:20.225 256+0 records out 00:11:20.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126015 s, 8.3 MB/s 00:11:20.225 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:20.225 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:20.225 256+0 records in 00:11:20.225 256+0 records out 00:11:20.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122052 s, 8.6 MB/s 00:11:20.225 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:20.225 16:28:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:20.225 256+0 records in 00:11:20.225 256+0 records out 00:11:20.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.102822 s, 10.2 MB/s 00:11:20.225 16:28:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:20.225 16:28:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 
oflag=direct 00:11:20.483 256+0 records in 00:11:20.483 256+0 records out 00:11:20.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103603 s, 10.1 MB/s 00:11:20.483 16:28:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:20.483 16:28:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:11:20.483 256+0 records in 00:11:20.483 256+0 records out 00:11:20.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.104477 s, 10.0 MB/s 00:11:20.483 16:28:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:20.483 16:28:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:11:20.742 256+0 records in 00:11:20.742 256+0 records out 00:11:20.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.106866 s, 9.8 MB/s 00:11:20.742 16:28:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:20.742 16:28:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:11:20.742 256+0 records in 00:11:20.742 256+0 records out 00:11:20.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165111 s, 6.4 MB/s 00:11:20.742 16:28:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:20.742 16:28:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:11:21.000 256+0 records in 00:11:21.000 256+0 records out 00:11:21.000 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.177435 s, 5.9 MB/s 00:11:21.000 16:28:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:11:21.000 16:28:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:11:21.259 256+0 records in 00:11:21.260 256+0 records out 00:11:21.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178958 s, 5.9 MB/s 00:11:21.260 16:28:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:21.260 16:28:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:11:21.260 256+0 records in 00:11:21.260 256+0 records out 00:11:21.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157774 s, 6.6 MB/s 00:11:21.260 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:21.260 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:11:21.519 256+0 records in 00:11:21.519 256+0 records out 00:11:21.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.102035 s, 10.3 MB/s 00:11:21.519 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:21.519 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:11:21.519 256+0 records in 00:11:21.519 256+0 records out 00:11:21.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158491 s, 6.6 MB/s 00:11:21.519 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:21.519 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:11:21.778 256+0 records in 
00:11:21.778 256+0 records out 00:11:21.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.101653 s, 10.3 MB/s 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:21.778 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock 
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:21.779 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.038 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:22.038 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:22.038 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:22.038 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:22.038 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.038 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.038 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:22.038 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.038 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.038 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.038 16:28:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:22.297 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:22.297 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:22.297 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:22.297 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.297 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.297 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:22.297 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.297 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.297 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.297 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:22.556 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:22.556 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:22.556 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:22.556 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.556 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.556 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:22.556 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.556 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.556 16:28:19 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.556 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:22.815 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:22.815 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:22.815 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:22.815 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.815 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.815 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:22.815 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.815 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.815 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.815 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:23.074 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:23.074 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:23.074 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:23.074 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.074 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.074 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:23.074 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.074 16:28:19 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.074 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.074 16:28:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:23.333 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:23.333 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:23.333 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:23.333 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.333 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.333 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:23.333 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.333 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.333 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.333 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:23.592 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:23.592 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:23.592 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:23.592 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.592 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.592 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 
/proc/partitions 00:11:23.592 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.592 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.592 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.592 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:23.852 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:23.852 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:23.852 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:23.852 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.852 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.852 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:23.852 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.852 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.852 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.852 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:24.111 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:24.111 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:24.111 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:24.111 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.111 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:11:24.111 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:24.111 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.111 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.111 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.111 16:28:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:24.370 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:24.370 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:24.370 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:24.370 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.370 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.370 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:24.370 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.370 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.370 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.370 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:24.629 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:24.629 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:24.629 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:24.629 16:28:21 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.629 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.629 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:24.629 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.629 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.629 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.629 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:24.889 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:24.889 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:24.889 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:24.889 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.889 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.889 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:24.889 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.889 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.889 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.889 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:25.149 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:25.149 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:25.149 16:28:21 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:25.149 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.149 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.149 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:25.149 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.149 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.149 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.149 16:28:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:25.408 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:25.408 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:25.408 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:25.408 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.408 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.408 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:25.408 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.408 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.408 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.408 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:25.667 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:25.667 
16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:25.667 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:25.667 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.667 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.667 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:25.667 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.668 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.668 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.668 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:25.928 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:25.928 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:25.928 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:25.928 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.928 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.928 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:25.928 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.928 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.928 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:25.928 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.928 16:28:22 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local 
nbd_list 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:26.245 16:28:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:26.245 malloc_lvol_verify 00:11:26.245 16:28:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:26.504 a632f506-97c0-481c-a7ef-b794771c6b4a 00:11:26.504 16:28:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:26.763 f6b7ac8a-fd17-451e-bca6-80aa75f37bbf 00:11:26.763 16:28:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:27.022 /dev/nbd0 00:11:27.022 16:28:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:27.022 mke2fs 1.46.5 (30-Dec-2021) 00:11:27.022 Discarding device blocks: 0/4096 done 00:11:27.022 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:27.022 00:11:27.022 Allocating group tables: 0/1 done 00:11:27.022 Writing inode tables: 0/1 done 00:11:27.022 Creating journal (1024 blocks): done 00:11:27.022 Writing superblocks and filesystem accounting information: 0/1 done 00:11:27.022 00:11:27.022 16:28:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:27.022 16:28:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:27.022 16:28:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:27.022 16:28:23 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:27.022 16:28:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:27.022 16:28:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:27.022 16:28:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:27.022 16:28:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 1562222 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 1562222 ']' 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 1562222 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:27.282 16:28:24 
blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1562222 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1562222' 00:11:27.282 killing process with pid 1562222 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@969 -- # kill 1562222 00:11:27.282 16:28:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@974 -- # wait 1562222 00:11:30.577 16:28:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:30.577 00:11:30.577 real 0m26.978s 00:11:30.577 user 0m32.232s 00:11:30.577 sys 0m13.035s 00:11:30.577 16:28:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.577 16:28:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:30.577 ************************************ 00:11:30.577 END TEST bdev_nbd 00:11:30.577 ************************************ 00:11:30.577 16:28:27 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:11:30.577 16:28:27 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:11:30.577 16:28:27 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:11:30.577 16:28:27 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:11:30.577 16:28:27 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:30.577 16:28:27 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.577 16:28:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:30.577 ************************************ 00:11:30.577 START TEST bdev_fio 00:11:30.577 ************************************ 00:11:30.578 16:28:27 
blockdev_general.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:11:30.578 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 
00:11:30.578 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:11:30.578 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # 
echo '[job_Malloc2p0]' 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]' 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2 00:11:30.838 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:11:30.839 16:28:27 
blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.839 16:28:27 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:11:30.839 ************************************ 00:11:30.839 START TEST bdev_fio_rw_verify 00:11:30.839 ************************************ 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:11:30.839 16:28:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # 
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:31.407 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:31.408 fio-3.35 00:11:31.408 Starting 16 threads 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: 
Requested device 0000:3d:02.2 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 
0000:3f:02.0 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:31.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:31.408 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:43.626 00:11:43.626 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=1567922: Wed Jul 24 16:28:39 2024 00:11:43.626 read: IOPS=100k, BW=392MiB/s (411MB/s)(3922MiB/10001msec) 00:11:43.626 slat (usec): min=3, max=417, avg=32.20, stdev=12.82 00:11:43.626 clat (usec): min=12, max=890, avg=252.91, stdev=113.94 00:11:43.626 lat (usec): min=21, max=906, avg=285.11, stdev=119.72 00:11:43.626 clat percentiles (usec): 00:11:43.626 | 50.000th=[ 247], 99.000th=[ 519], 99.900th=[ 594], 99.990th=[ 701], 00:11:43.626 | 99.999th=[ 775] 00:11:43.626 write: IOPS=159k, BW=620MiB/s (650MB/s)(6113MiB/9867msec); 0 zone resets 00:11:43.626 slat (usec): min=8, max=418, avg=45.08, stdev=12.41 00:11:43.626 clat (usec): min=12, max=1097, avg=298.49, stdev=131.38 00:11:43.626 lat (usec): min=34, max=1213, avg=343.57, stdev=135.94 00:11:43.626 clat percentiles (usec): 00:11:43.626 | 50.000th=[ 289], 99.000th=[ 603], 99.900th=[ 725], 99.990th=[ 791], 00:11:43.626 | 99.999th=[ 922] 00:11:43.626 
bw ( KiB/s): min=563424, max=728086, per=98.90%, avg=627468.53, stdev=3106.01, samples=304 00:11:43.626 iops : min=140856, max=182020, avg=156867.05, stdev=776.49, samples=304 00:11:43.626 lat (usec) : 20=0.01%, 50=0.85%, 100=5.24%, 250=38.82%, 500=50.19% 00:11:43.626 lat (usec) : 750=4.87%, 1000=0.03% 00:11:43.626 lat (msec) : 2=0.01% 00:11:43.626 cpu : usr=98.48%, sys=0.82%, ctx=564, majf=0, minf=125555 00:11:43.626 IO depths : 1=12.5%, 2=24.9%, 4=50.1%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:43.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.626 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:43.626 issued rwts: total=1004141,1564991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:43.626 latency : target=0, window=0, percentile=100.00%, depth=8 00:11:43.626 00:11:43.626 Run status group 0 (all jobs): 00:11:43.627 READ: bw=392MiB/s (411MB/s), 392MiB/s-392MiB/s (411MB/s-411MB/s), io=3922MiB (4113MB), run=10001-10001msec 00:11:43.627 WRITE: bw=620MiB/s (650MB/s), 620MiB/s-620MiB/s (650MB/s-650MB/s), io=6113MiB (6410MB), run=9867-9867msec 00:11:45.534 ----------------------------------------------------- 00:11:45.534 Suppressions used: 00:11:45.534 count bytes template 00:11:45.534 16 140 /usr/src/fio/parse.c 00:11:45.534 9001 864096 /usr/src/fio/iolog.c 00:11:45.534 1 8 libtcmalloc_minimal.so 00:11:45.534 1 904 libcrypto.so 00:11:45.534 ----------------------------------------------------- 00:11:45.534 00:11:45.534 00:11:45.534 real 0m14.825s 00:11:45.534 user 2m54.844s 00:11:45.534 sys 0m2.946s 00:11:45.534 16:28:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.534 16:28:42 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:11:45.534 ************************************ 00:11:45.534 END TEST bdev_fio_rw_verify 00:11:45.534 ************************************ 00:11:45.534 16:28:42 
blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:11:45.534 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 
-- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:45.536 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "8065f432-2179-4004-b124-80c78e6ebb5c"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8065f432-2179-4004-b124-80c78e6ebb5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "8db9d1e3-1334-5c4d-8d55-dbb274b1f5c6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8db9d1e3-1334-5c4d-8d55-dbb274b1f5c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' 
"abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2603ddc3-eb4c-515c-bd9b-12b15b35dc31"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2603ddc3-eb4c-515c-bd9b-12b15b35dc31",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "0773fd86-7eae-58d0-b046-280cf5787b29"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0773fd86-7eae-58d0-b046-280cf5787b29",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "d55510ff-ef96-5be7-914b-361534477547"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d55510ff-ef96-5be7-914b-361534477547",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "6b16a204-989d-58e6-97d1-2150a62c42af"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b16a204-989d-58e6-97d1-2150a62c42af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": 
false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "38db5ee5-bf8e-586d-8281-1a9eccc1f11a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "38db5ee5-bf8e-586d-8281-1a9eccc1f11a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "a13a9fe7-04a6-51f0-9a7c-963ba5612a05"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a13a9fe7-04a6-51f0-9a7c-963ba5612a05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": 
{' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "121c8ae5-a756-5a7a-a5d9-8f275c708cc2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "121c8ae5-a756-5a7a-a5d9-8f275c708cc2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "07c66f84-6e10-51ff-a593-098911dfb2f7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "07c66f84-6e10-51ff-a593-098911dfb2f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 
49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "34d79038-6b89-5da5-a67c-095fddef7bef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "34d79038-6b89-5da5-a67c-095fddef7bef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "f8e3e86d-60ad-5dd2-9f56-54c78ae66ccc"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f8e3e86d-60ad-5dd2-9f56-54c78ae66ccc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' 
' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d9d16e54-0aed-4930-bdb4-7e2861ebfe67"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d9d16e54-0aed-4930-bdb4-7e2861ebfe67",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d9d16e54-0aed-4930-bdb4-7e2861ebfe67",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "2a002aaa-bb1a-4abf-b680-6fd14eb5fb44",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "a9d71a2a-6659-4da1-a825-1751cfdd827d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": 
[' ' "3a5ce5db-8fd2-44c8-9b97-62ee77ce5cb3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3a5ce5db-8fd2-44c8-9b97-62ee77ce5cb3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "3a5ce5db-8fd2-44c8-9b97-62ee77ce5cb3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "d50051ac-5410-45d7-b24d-293d9fcfa556",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a4397356-45e0-4209-b21d-faf19e8d7f39",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "c6e56614-5c68-48db-824e-b627920f64c1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c6e56614-5c68-48db-824e-b627920f64c1",' 
' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c6e56614-5c68-48db-824e-b627920f64c1",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "ffc64c05-2cb6-4f32-993d-cb82a31f1030",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "766362e4-544a-4760-81b0-42a2f2b52549",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "84d0b0a2-94a4-4e5b-b59e-091b598d8ae0"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "84d0b0a2-94a4-4e5b-b59e-091b598d8ae0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:45.797 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:11:45.797 Malloc1p0 00:11:45.797 Malloc1p1 00:11:45.797 Malloc2p0 00:11:45.797 Malloc2p1 00:11:45.797 Malloc2p2 00:11:45.797 Malloc2p3 00:11:45.797 Malloc2p4 00:11:45.797 Malloc2p5 00:11:45.798 Malloc2p6 00:11:45.798 Malloc2p7 00:11:45.798 TestPT 00:11:45.798 raid0 00:11:45.798 concat0 ]] 00:11:45.798 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "8065f432-2179-4004-b124-80c78e6ebb5c"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8065f432-2179-4004-b124-80c78e6ebb5c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": 
false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "8db9d1e3-1334-5c4d-8d55-dbb274b1f5c6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8db9d1e3-1334-5c4d-8d55-dbb274b1f5c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2603ddc3-eb4c-515c-bd9b-12b15b35dc31"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2603ddc3-eb4c-515c-bd9b-12b15b35dc31",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' 
' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "0773fd86-7eae-58d0-b046-280cf5787b29"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0773fd86-7eae-58d0-b046-280cf5787b29",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "d55510ff-ef96-5be7-914b-361534477547"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d55510ff-ef96-5be7-914b-361534477547",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' 
' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "6b16a204-989d-58e6-97d1-2150a62c42af"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6b16a204-989d-58e6-97d1-2150a62c42af",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "38db5ee5-bf8e-586d-8281-1a9eccc1f11a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "38db5ee5-bf8e-586d-8281-1a9eccc1f11a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "a13a9fe7-04a6-51f0-9a7c-963ba5612a05"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a13a9fe7-04a6-51f0-9a7c-963ba5612a05",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "121c8ae5-a756-5a7a-a5d9-8f275c708cc2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "121c8ae5-a756-5a7a-a5d9-8f275c708cc2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' 
' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "07c66f84-6e10-51ff-a593-098911dfb2f7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "07c66f84-6e10-51ff-a593-098911dfb2f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "34d79038-6b89-5da5-a67c-095fddef7bef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "34d79038-6b89-5da5-a67c-095fddef7bef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' 
"nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "f8e3e86d-60ad-5dd2-9f56-54c78ae66ccc"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f8e3e86d-60ad-5dd2-9f56-54c78ae66ccc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d9d16e54-0aed-4930-bdb4-7e2861ebfe67"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d9d16e54-0aed-4930-bdb4-7e2861ebfe67",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": 
false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d9d16e54-0aed-4930-bdb4-7e2861ebfe67",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "2a002aaa-bb1a-4abf-b680-6fd14eb5fb44",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "a9d71a2a-6659-4da1-a825-1751cfdd827d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "3a5ce5db-8fd2-44c8-9b97-62ee77ce5cb3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3a5ce5db-8fd2-44c8-9b97-62ee77ce5cb3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' 
' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "3a5ce5db-8fd2-44c8-9b97-62ee77ce5cb3",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "d50051ac-5410-45d7-b24d-293d9fcfa556",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "a4397356-45e0-4209-b21d-faf19e8d7f39",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "c6e56614-5c68-48db-824e-b627920f64c1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c6e56614-5c68-48db-824e-b627920f64c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' 
"dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c6e56614-5c68-48db-824e-b627920f64c1",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "ffc64c05-2cb6-4f32-993d-cb82a31f1030",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "766362e4-544a-4760-81b0-42a2f2b52549",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "84d0b0a2-94a4-4e5b-b59e-091b598d8ae0"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "84d0b0a2-94a4-4e5b-b59e-091b598d8ae0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 
00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:11:45.799 
16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:11:45.799 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:11:45.800 16:28:42 
blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.800 16:28:42 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:11:45.800 ************************************ 00:11:45.800 START TEST bdev_fio_trim 00:11:45.800 ************************************ 00:11:45.800 16:28:42 
blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:11:45.800 16:28:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:46.366 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:46.366 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:46.366 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:46.366 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:46.366 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:46.366 
job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:46.366 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:46.366 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:46.366 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:46.366 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:46.366 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:46.366 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:46.366 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:46.366 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:46.366 fio-3.35 00:11:46.366 Starting 14 threads 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number 
of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:46.626 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:11:46.626 EAL: Requested device 0000:3f:01.3 cannot be used
00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:46.626 EAL: Requested device 0000:3f:01.4 cannot be used
00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:46.626 EAL: Requested device 0000:3f:01.5 cannot be used
00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:46.626 EAL: Requested device 0000:3f:01.6 cannot be used
00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:46.626 EAL: Requested device 0000:3f:01.7 cannot be used
00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:46.626 EAL: Requested device 0000:3f:02.0 cannot be used
00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:46.626 EAL: Requested device 0000:3f:02.1 cannot be used
00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:46.626 EAL: Requested device 0000:3f:02.2 cannot be used
00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:46.626 EAL: Requested device 0000:3f:02.3 cannot be used
00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:46.626 EAL: Requested device 0000:3f:02.4 cannot be used
00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:46.626 EAL: Requested device 0000:3f:02.5 cannot be used
00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:46.626 EAL: Requested device 0000:3f:02.6 cannot be used
00:11:46.626 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:46.626 EAL: Requested device 0000:3f:02.7 cannot be used
00:11:58.836
00:11:58.836 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=1570701: Wed Jul 24 16:28:54 2024
00:11:58.836 write: IOPS=136k, BW=531MiB/s (557MB/s)(5313MiB/10001msec); 0 zone resets
00:11:58.836 slat (usec): min=8, max=218, avg=36.46, stdev= 9.31
00:11:58.836 clat (usec): min=24, max=1242, avg=254.15, stdev=84.55
00:11:58.836 lat (usec): min=38, max=1334, avg=290.61, stdev=87.15
00:11:58.836 clat percentiles (usec):
00:11:58.836 | 50.000th=[ 247], 99.000th=[ 445], 99.900th=[ 506], 99.990th=[ 668],
00:11:58.836 | 99.999th=[ 955]
00:11:58.836 bw ( KiB/s): min=486336, max=694400, per=100.00%, avg=545791.58, stdev=4629.69, samples=266
00:11:58.836 iops : min=121584, max=173600, avg=136447.74, stdev=1157.42, samples=266
00:11:58.836 trim: IOPS=136k, BW=531MiB/s (557MB/s)(5313MiB/10001msec); 0 zone resets
00:11:58.836 slat (usec): min=6, max=402, avg=25.64, stdev= 6.79
00:11:58.836 clat (usec): min=38, max=1335, avg=290.79, stdev=87.15
00:11:58.836 lat (usec): min=49, max=1378, avg=316.43, stdev=89.27
00:11:58.836 clat percentiles (usec):
00:11:58.836 | 50.000th=[ 285], 99.000th=[ 486], 99.900th=[ 553], 99.990th=[ 742],
00:11:58.836 | 99.999th=[ 1057]
00:11:58.836 bw ( KiB/s): min=486336, max=694400, per=100.00%, avg=545791.58, stdev=4629.69, samples=266
00:11:58.836 iops : min=121584, max=173600, avg=136447.74, stdev=1157.42, samples=266
00:11:58.836 lat (usec) : 50=0.01%, 100=0.68%, 250=43.14%, 500=55.81%, 750=0.36%
00:11:58.836 lat (usec) : 1000=0.01%
00:11:58.836 lat (msec) : 2=0.01%
00:11:58.836 cpu : usr=99.59%, sys=0.03%, ctx=549, majf=0, minf=15701
00:11:58.836 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:58.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:58.836 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:58.836 issued rwts: total=0,1360072,1360073,0 short=0,0,0,0 dropped=0,0,0,0
00:11:58.836 latency : target=0, window=0, percentile=100.00%, depth=8
00:11:58.836
00:11:58.836 Run status group 0 (all jobs):
00:11:58.836 WRITE: bw=531MiB/s (557MB/s), 531MiB/s-531MiB/s (557MB/s-557MB/s), io=5313MiB (5571MB), run=10001-10001msec
00:11:58.836 TRIM: bw=531MiB/s (557MB/s), 531MiB/s-531MiB/s (557MB/s-557MB/s), io=5313MiB (5571MB), run=10001-10001msec
00:12:01.460 -----------------------------------------------------
00:12:01.460 Suppressions used:
00:12:01.460 count bytes template
00:12:01.460 14 129 /usr/src/fio/parse.c
00:12:01.460 1 8 libtcmalloc_minimal.so
00:12:01.460 1 904 libcrypto.so
00:12:01.460 -----------------------------------------------------
00:12:01.460
00:12:01.460
00:12:01.460 real 0m15.676s
00:12:01.460 user 2m38.025s
00:12:01.460 sys 0m1.437s
00:12:01.460 16:28:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:01.460 16:28:58 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:12:01.460 ************************************
00:12:01.460 END TEST bdev_fio_trim
00:12:01.460 ************************************
00:12:01.460 16:28:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f
00:12:01.460 16:28:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:12:01.460 16:28:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd
00:12:01.460 /var/jenkins/workspace/crypto-phy-autotest/spdk
00:12:01.460 16:28:58 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:12:01.460
00:12:01.460 real 0m30.839s
00:12:01.460 user 5m33.061s
00:12:01.460 sys 0m4.561s
00:12:01.460 16:28:58 blockdev_general.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:01.460 16:28:58 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:12:01.460 ************************************
00:12:01.460 END TEST bdev_fio
00:12:01.460 ************************************
00:12:01.460 16:28:58 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:12:01.460 16:28:58 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:01.460 16:28:58 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:12:01.460 16:28:58 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:01.460 16:28:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:12:01.460 ************************************
00:12:01.460 START TEST bdev_verify
00:12:01.460 ************************************
00:12:01.460 16:28:58 blockdev_general.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:01.720 [2024-07-24 16:28:58.381908] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:12:01.720 [2024-07-24 16:28:58.382025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572921 ]
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:01.0 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:01.1 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:01.2 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:01.3 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:01.4 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:01.5 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:01.6 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:01.7 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:02.0 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:02.1 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:02.2 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:02.3 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:02.4 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:02.5 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:02.6 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3d:02.7 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:01.0 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:01.1 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:01.2 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:01.3 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:01.4 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:01.5 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:01.6 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:01.7 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:02.0 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:02.1 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:02.2 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:02.3 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:02.4 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:02.5 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:02.6 cannot be used
00:12:01.720 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:01.720 EAL: Requested device 0000:3f:02.7 cannot be used
00:12:01.979 [2024-07-24 16:28:58.608636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:02.237 [2024-07-24 16:28:58.872846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:02.237 [2024-07-24 16:28:58.872854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:02.804 [2024-07-24 16:28:59.462567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:02.805 [2024-07-24 16:28:59.462643] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:12:02.805 [2024-07-24 16:28:59.462664] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:12:02.805 [2024-07-24 16:28:59.470559] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:02.805 [2024-07-24 16:28:59.470607] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:02.805 [2024-07-24 16:28:59.478561] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:02.805 [2024-07-24 16:28:59.478602] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:03.063 [2024-07-24 16:28:59.741006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:03.063 [2024-07-24 16:28:59.741070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:03.063 [2024-07-24 16:28:59.741092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980
00:12:03.063 [2024-07-24 16:28:59.741108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:03.063 [2024-07-24 16:28:59.743868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:03.063 [2024-07-24 16:28:59.743905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:12:03.631 Running I/O for 5 seconds...
00:12:08.905
00:12:08.905 Latency(us)
00:12:08.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:08.905 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.905 Verification LBA range: start 0x0 length 0x1000
00:12:08.905 Malloc0 : 5.19 1060.93 4.14 0.00 0.00 120400.16 714.34 286890.39
00:12:08.905 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.905 Verification LBA range: start 0x1000 length 0x1000
00:12:08.905 Malloc0 : 5.17 1039.34 4.06 0.00 0.00 122902.77 609.48 446273.95
00:12:08.905 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.905 Verification LBA range: start 0x0 length 0x800
00:12:08.905 Malloc1p0 : 5.19 542.52 2.12 0.00 0.00 234479.49 3670.02 271790.90
00:12:08.905 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.905 Verification LBA range: start 0x800 length 0x800
00:12:08.905 Malloc1p0 : 5.17 544.18 2.13 0.00 0.00 233806.67 3643.80 256691.40
00:12:08.905 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.905 Verification LBA range: start 0x0 length 0x800
00:12:08.905 Malloc1p1 : 5.19 542.29 2.12 0.00 0.00 233739.68 3538.94 261724.57
00:12:08.905 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.905 Verification LBA range: start 0x800 length 0x800
00:12:08.905 Malloc1p1 : 5.18 543.94 2.12 0.00 0.00 233048.61 3512.73 251658.24
00:12:08.905 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.905 Verification LBA range: start 0x0 length 0x200
00:12:08.905 Malloc2p0 : 5.19 542.06 2.12 0.00 0.00 233069.07 3538.94 253335.96
00:12:08.905 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.905 Verification LBA range: start 0x200 length 0x200
00:12:08.905 Malloc2p0 : 5.18 543.70 2.12 0.00 0.00 232362.89 3538.94 241591.91
00:12:08.905 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x0 length 0x200
00:12:08.906 Malloc2p1 : 5.20 541.83 2.12 0.00 0.00 232352.81 3512.73 249980.52
00:12:08.906 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x200 length 0x200
00:12:08.906 Malloc2p1 : 5.18 543.45 2.12 0.00 0.00 231646.83 3512.73 233203.30
00:12:08.906 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x0 length 0x200
00:12:08.906 Malloc2p2 : 5.20 541.60 2.12 0.00 0.00 231649.05 3512.73 243269.63
00:12:08.906 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x200 length 0x200
00:12:08.906 Malloc2p2 : 5.18 543.20 2.12 0.00 0.00 230952.44 3512.73 228170.14
00:12:08.906 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x0 length 0x200
00:12:08.906 Malloc2p3 : 5.20 541.38 2.11 0.00 0.00 230933.72 3565.16 238236.47
00:12:08.906 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x200 length 0x200
00:12:08.906 Malloc2p3 : 5.19 542.96 2.12 0.00 0.00 230253.20 3538.94 221459.25
00:12:08.906 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x0 length 0x200
00:12:08.906 Malloc2p4 : 5.20 541.15 2.11 0.00 0.00 230229.45 3512.73 233203.30
00:12:08.906 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x200 length 0x200
00:12:08.906 Malloc2p4 : 5.19 542.73 2.12 0.00 0.00 229545.60 3512.73 218103.81
00:12:08.906 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x0 length 0x200
00:12:08.906 Malloc2p5 : 5.21 540.92 2.11 0.00 0.00 229514.84 3722.44 229847.86
00:12:08.906 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x200 length 0x200
00:12:08.906 Malloc2p5 : 5.28 557.17 2.18 0.00 0.00 222918.69 3670.02 211392.92
00:12:08.906 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x0 length 0x200
00:12:08.906 Malloc2p6 : 5.29 556.42 2.17 0.00 0.00 222473.79 3565.16 224814.69
00:12:08.906 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x200 length 0x200
00:12:08.906 Malloc2p6 : 5.29 556.93 2.18 0.00 0.00 222278.79 3565.16 208037.48
00:12:08.906 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x0 length 0x200
00:12:08.906 Malloc2p7 : 5.30 555.86 2.17 0.00 0.00 221970.07 3486.52 219781.53
00:12:08.906 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x200 length 0x200
00:12:08.906 Malloc2p7 : 5.29 556.68 2.17 0.00 0.00 221630.99 3486.52 204682.04
00:12:08.906 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x0 length 0x1000
00:12:08.906 TestPT : 5.30 536.20 2.09 0.00 0.00 228450.43 21390.95 221459.25
00:12:08.906 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x1000 length 0x1000
00:12:08.906 TestPT : 5.29 532.23 2.08 0.00 0.00 230755.23 21286.09 286890.39
00:12:08.906 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x0 length 0x2000
00:12:08.906 raid0 : 5.30 555.17 2.17 0.00 0.00 220348.45 3460.30 190421.40
00:12:08.906 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x2000 length 0x2000
00:12:08.906 raid0 : 5.30 555.86 2.17 0.00 0.00 220138.26 3486.52 174483.05
00:12:08.906 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x0 length 0x2000
00:12:08.906 concat0 : 5.31 554.94 2.17 0.00 0.00 219734.55 3407.87 184549.38
00:12:08.906 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x2000 length 0x2000
00:12:08.906 concat0 : 5.30 555.36 2.17 0.00 0.00 219660.43 3355.44 168611.02
00:12:08.906 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x0 length 0x1000
00:12:08.906 raid1 : 5.31 554.56 2.17 0.00 0.00 219182.47 4141.88 178677.35
00:12:08.906 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x1000 length 0x1000
00:12:08.906 raid1 : 5.30 555.08 2.17 0.00 0.00 219054.09 4299.16 176160.77
00:12:08.906 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x0 length 0x4e2
00:12:08.906 AIO0 : 5.31 554.41 2.17 0.00 0.00 218580.46 1454.90 183710.52
00:12:08.906 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:12:08.906 Verification LBA range: start 0x4e2 length 0x4e2
00:12:08.906 AIO0 : 5.31 554.89 2.17 0.00 0.00 218448.91 1428.68 182871.65
00:12:08.906 ===================================================================================================================
00:12:08.906 Total : 18529.93 72.38 0.00 0.00 214879.84 609.48 446273.95
00:12:12.194
00:12:12.194
00:12:12.194 real 0m10.694s
00:12:12.194 user 0m19.423s
00:12:12.194 sys 0m0.574s
00:12:12.194 16:29:08 blockdev_general.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:12.194 16:29:08 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:12:12.194 ************************************
00:12:12.194 END TEST bdev_verify
00:12:12.194 ************************************
00:12:12.194 16:29:09 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:12.194 16:29:09 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:12:12.194 16:29:09 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:12.194 16:29:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:12:12.453 ************************************
00:12:12.453 START TEST bdev_verify_big_io
00:12:12.453 ************************************
00:12:12.453 16:29:09 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:12:12.453 [2024-07-24 16:29:09.165920] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:12:12.453 [2024-07-24 16:29:09.166033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574728 ]
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:01.0 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:01.1 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:01.2 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:01.3 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:01.4 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:01.5 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:01.6 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:01.7 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:02.0 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:02.1 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:02.2 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:02.3 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:02.4 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:02.5 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:02.6 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3d:02.7 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:01.0 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:01.1 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:01.2 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:01.3 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:01.4 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:01.5 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:01.6 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:01.7 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:02.0 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:02.1 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:02.2 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:02.3 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:02.4 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:02.5 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:02.6 cannot be used
00:12:12.453 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:12.453 EAL: Requested device 0000:3f:02.7 cannot be used
00:12:12.712 [2024-07-24 16:29:09.393757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:12.970 [2024-07-24 16:29:09.665458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:12.971 [2024-07-24 16:29:09.665462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:13.538 [2024-07-24 16:29:10.233715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:13.538 [2024-07-24 16:29:10.233791] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:12:13.538 [2024-07-24 16:29:10.233813] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:12:13.538 [2024-07-24 16:29:10.241711] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:13.538 [2024-07-24 16:29:10.241758] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:13.539 [2024-07-24 16:29:10.249714] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:13.539 [2024-07-24 16:29:10.249752] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:13.797 [2024-07-24 16:29:10.514360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:13.797 [2024-07-24 16:29:10.514425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:13.797 [2024-07-24 16:29:10.514447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980
00:12:13.797 [2024-07-24 16:29:10.514462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:13.797 [2024-07-24 16:29:10.517255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:13.797 [2024-07-24 16:29:10.517290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:12:14.366 [2024-07-24 16:29:11.033161] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.038473] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.044222] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.049727] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.055577] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.061340] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.066602] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.072346] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.077519] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.083190] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.088416] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.094187] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.099578] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.105252] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.110574] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:12:14.366 [2024-07-24 16:29:11.116385] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:12:14.625 [2024-07-24 16:29:11.249387] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:12:14.625 [2024-07-24 16:29:11.259820] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:12:14.625 Running I/O for 5 seconds...
00:12:22.741
00:12:22.741 Latency(us)
00:12:22.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:22.741 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:22.741 Verification LBA range: start 0x0 length 0x100
00:12:22.741 Malloc0 : 5.83 175.65 10.98 0.00 0.00 714037.66 845.41 1919313.51
00:12:22.741 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:22.741 Verification LBA range: start 0x100 length 0x100
00:12:22.741 Malloc0 : 5.95 150.68 9.42 0.00 0.00 833295.16 838.86 2281701.38
00:12:22.741 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:22.741 Verification LBA range: start 0x0 length 0x80
00:12:22.741 Malloc1p0 : 6.87 34.95 2.18 0.00 0.00 3239346.03 1559.76 5449239.76
00:12:22.741 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:22.741 Verification LBA range: start 0x80 length 0x80
00:12:22.741 Malloc1p0 : 6.27 87.46 5.47 0.00 0.00 1349037.86 2516.58 2697776.33
00:12:22.741 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:22.741 Verification LBA range: start 0x0 length 0x80
00:12:22.741 Malloc1p1 : 6.87 34.94 2.18 0.00 0.00 3131450.91 1468.01 5234491.39
00:12:22.741 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:22.741 Verification LBA range: start 0x80 length 0x80
00:12:22.741 Malloc1p1 : 6.66 36.02 2.25 0.00 0.00 3127809.95 1481.11 5368709.12
00:12:22.741 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:22.741 Verification LBA range: start 0x0 length 0x20
00:12:22.741 Malloc2p0 : 6.22 25.70 1.61 0.00 0.00 1105573.98 635.70 2107218.33
00:12:22.741 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:22.741 Verification LBA range: start 0x20 length 0x20
00:12:22.741 Malloc2p0 : 6.20 23.24 1.45 0.00 0.00
1209381.79 635.70 1986422.37 00:12:22.741 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:22.741 Verification LBA range: start 0x0 length 0x20 00:12:22.741 Malloc2p1 : 6.23 25.70 1.61 0.00 0.00 1095712.31 638.98 2080374.78 00:12:22.742 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x20 length 0x20 00:12:22.742 Malloc2p1 : 6.20 23.24 1.45 0.00 0.00 1198492.84 642.25 1959578.83 00:12:22.742 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x0 length 0x20 00:12:22.742 Malloc2p2 : 6.23 25.69 1.61 0.00 0.00 1085823.01 638.98 2053531.24 00:12:22.742 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x20 length 0x20 00:12:22.742 Malloc2p2 : 6.27 25.53 1.60 0.00 0.00 1100008.62 642.25 1946157.06 00:12:22.742 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x0 length 0x20 00:12:22.742 Malloc2p3 : 6.23 25.69 1.61 0.00 0.00 1075217.56 638.98 2013265.92 00:12:22.742 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x20 length 0x20 00:12:22.742 Malloc2p3 : 6.27 25.52 1.60 0.00 0.00 1090271.75 638.98 1919313.51 00:12:22.742 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x0 length 0x20 00:12:22.742 Malloc2p4 : 6.23 25.68 1.61 0.00 0.00 1065328.96 645.53 1986422.37 00:12:22.742 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x20 length 0x20 00:12:22.742 Malloc2p4 : 6.27 25.52 1.59 0.00 0.00 1081624.79 642.25 1892469.96 00:12:22.742 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x0 length 
0x20 00:12:22.742 Malloc2p5 : 6.23 25.68 1.60 0.00 0.00 1056100.08 642.25 1973000.60 00:12:22.742 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x20 length 0x20 00:12:22.742 Malloc2p5 : 6.27 25.51 1.59 0.00 0.00 1071819.15 645.53 1865626.42 00:12:22.742 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x0 length 0x20 00:12:22.742 Malloc2p6 : 6.23 25.67 1.60 0.00 0.00 1046070.04 655.36 1946157.06 00:12:22.742 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x20 length 0x20 00:12:22.742 Malloc2p6 : 6.27 25.51 1.59 0.00 0.00 1062307.52 635.70 1838782.87 00:12:22.742 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x0 length 0x20 00:12:22.742 Malloc2p7 : 6.23 25.66 1.60 0.00 0.00 1036103.85 648.81 1919313.51 00:12:22.742 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x20 length 0x20 00:12:22.742 Malloc2p7 : 6.27 25.50 1.59 0.00 0.00 1052475.69 655.36 1811939.33 00:12:22.742 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x0 length 0x100 00:12:22.742 TestPT : 6.92 39.31 2.46 0.00 0.00 2587285.78 1513.88 4804994.66 00:12:22.742 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x100 length 0x100 00:12:22.742 TestPT : 6.71 33.96 2.12 0.00 0.00 3016139.64 89338.68 3489660.93 00:12:22.742 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x0 length 0x200 00:12:22.742 raid0 : 6.90 41.74 2.61 0.00 0.00 2386742.12 1605.63 4617089.84 00:12:22.742 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 
00:12:22.742 Verification LBA range: start 0x200 length 0x200 00:12:22.742 raid0 : 6.59 41.26 2.58 0.00 0.00 2415268.79 1599.08 4751307.57 00:12:22.742 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x0 length 0x200 00:12:22.742 concat0 : 6.69 55.12 3.45 0.00 0.00 1775983.62 1592.52 4456028.57 00:12:22.742 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x200 length 0x200 00:12:22.742 concat0 : 6.76 47.31 2.96 0.00 0.00 2065842.64 1579.42 4563402.75 00:12:22.742 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x0 length 0x100 00:12:22.742 raid1 : 6.89 71.95 4.50 0.00 0.00 1329239.59 2018.51 4268123.75 00:12:22.742 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x100 length 0x100 00:12:22.742 raid1 : 6.77 52.02 3.25 0.00 0.00 1830466.34 2044.72 4402341.48 00:12:22.742 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x0 length 0x4e 00:12:22.742 AIO0 : 6.90 56.25 3.52 0.00 0.00 1004707.48 773.32 2764885.20 00:12:22.742 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:12:22.742 Verification LBA range: start 0x4e length 0x4e 00:12:22.742 AIO0 : 6.87 75.65 4.73 0.00 0.00 753260.62 766.77 2831994.06 00:12:22.742 =================================================================================================================== 00:12:22.742 Total : 1439.33 89.96 0.00 0.00 1451545.64 635.70 5449239.76 00:12:25.334 00:12:25.334 real 0m12.796s 00:12:25.334 user 0m23.602s 00:12:25.334 sys 0m0.622s 00:12:25.334 16:29:21 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.334 16:29:21 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 
00:12:25.334 ************************************ 00:12:25.334 END TEST bdev_verify_big_io 00:12:25.334 ************************************ 00:12:25.334 16:29:21 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:25.334 16:29:21 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:25.334 16:29:21 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.334 16:29:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:25.334 ************************************ 00:12:25.334 START TEST bdev_write_zeroes 00:12:25.334 ************************************ 00:12:25.334 16:29:21 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:25.334 [2024-07-24 16:29:22.043090] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:12:25.334 [2024-07-24 16:29:22.043214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1577024 ] 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:02.3 cannot be used 
00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:25.334 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:25.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:25.334 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:25.593 [2024-07-24 16:29:22.267282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.851 [2024-07-24 16:29:22.529268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.418 [2024-07-24 16:29:23.140089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:26.418 [2024-07-24 16:29:23.140179] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:26.418 [2024-07-24 16:29:23.140200] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:26.418 [2024-07-24 16:29:23.148066] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:26.418 [2024-07-24 16:29:23.148108] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:26.418 [2024-07-24 16:29:23.156077] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:26.418 [2024-07-24 16:29:23.156117] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:26.683 [2024-07-24 16:29:23.430622] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:26.683 [2024-07-24 16:29:23.430685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.683 [2024-07-24 16:29:23.430707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:12:26.683 [2024-07-24 16:29:23.430723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.683 [2024-07-24 16:29:23.433449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.683 [2024-07-24 16:29:23.433483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:27.253 Running I/O for 1 seconds... 00:12:28.628 00:12:28.628 Latency(us) 00:12:28.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.628 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 Malloc0 : 1.05 4992.21 19.50 0.00 0.00 25623.45 625.87 43201.33 00:12:28.628 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 Malloc1p0 : 1.05 4985.23 19.47 0.00 0.00 25614.58 897.84 42362.47 00:12:28.628 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 Malloc1p1 : 1.05 4978.34 19.45 0.00 0.00 25594.22 878.18 41313.89 00:12:28.628 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 Malloc2p0 : 1.06 4971.46 19.42 0.00 0.00 25577.21 878.18 40475.03 00:12:28.628 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 Malloc2p1 : 1.06 4964.62 19.39 0.00 0.00 25555.14 878.18 39636.17 00:12:28.628 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 Malloc2p2 : 1.06 4957.78 19.37 0.00 0.00 25535.13 891.29 38587.60 00:12:28.628 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 
Malloc2p3 : 1.06 4950.97 19.34 0.00 0.00 25514.96 878.18 37748.74 00:12:28.628 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 Malloc2p4 : 1.06 4944.16 19.31 0.00 0.00 25494.40 878.18 36909.88 00:12:28.628 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 Malloc2p5 : 1.06 4937.39 19.29 0.00 0.00 25475.44 884.74 36071.01 00:12:28.628 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 Malloc2p6 : 1.06 4930.65 19.26 0.00 0.00 25459.87 878.18 35232.15 00:12:28.628 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 Malloc2p7 : 1.07 4923.88 19.23 0.00 0.00 25438.01 878.18 34393.29 00:12:28.628 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 TestPT : 1.07 4917.18 19.21 0.00 0.00 25413.98 917.50 33344.72 00:12:28.628 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 raid0 : 1.07 4909.00 19.18 0.00 0.00 25380.10 1671.17 31457.28 00:12:28.628 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 concat0 : 1.07 4901.32 19.15 0.00 0.00 25325.74 1651.51 29779.56 00:12:28.628 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 raid1 : 1.07 4891.60 19.11 0.00 0.00 25262.09 2686.98 27053.26 00:12:28.628 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:28.628 AIO0 : 1.07 4885.89 19.09 0.00 0.00 25168.18 996.15 26214.40 00:12:28.628 =================================================================================================================== 00:12:28.628 Total : 79041.68 308.76 0.00 0.00 25464.53 625.87 43201.33 00:12:31.911 00:12:31.911 real 0m6.132s 00:12:31.911 user 0m5.494s 00:12:31.911 sys 0m0.538s 00:12:31.911 16:29:28 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:12:31.911 16:29:28 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:31.911 ************************************ 00:12:31.911 END TEST bdev_write_zeroes 00:12:31.911 ************************************ 00:12:31.911 16:29:28 blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:31.911 16:29:28 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:31.911 16:29:28 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.911 16:29:28 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:31.911 ************************************ 00:12:31.911 START TEST bdev_json_nonenclosed 00:12:31.911 ************************************ 00:12:31.911 16:29:28 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:31.911 [2024-07-24 16:29:28.254997] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:12:31.911 [2024-07-24 16:29:28.255110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578096 ] 00:12:31.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.911 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:31.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.911 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:31.911 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3d:02.3 cannot be used 
00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:31.912 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:31.912 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:31.912 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:31.912 [2024-07-24 16:29:28.483264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.912 [2024-07-24 16:29:28.761088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.912 [2024-07-24 16:29:28.761189] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:12:31.912 [2024-07-24 16:29:28.761216] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:31.912 [2024-07-24 16:29:28.761232] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:32.848 00:12:32.848 real 0m1.204s 00:12:32.848 user 0m0.920s 00:12:32.848 sys 0m0.278s 00:12:32.848 16:29:29 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.848 16:29:29 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:32.848 ************************************ 00:12:32.848 END TEST bdev_json_nonenclosed 00:12:32.848 ************************************ 00:12:32.848 16:29:29 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:32.848 16:29:29 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:32.848 16:29:29 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.848 16:29:29 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:32.848 ************************************ 00:12:32.848 START TEST bdev_json_nonarray 00:12:32.848 ************************************ 00:12:32.848 16:29:29 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:32.848 [2024-07-24 16:29:29.537726] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:12:32.848 [2024-07-24 16:29:29.537841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578241 ] 00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.848 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.848 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.848 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.848 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.848 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.848 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.848 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.848 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.848 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.848 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.848 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:32.848 EAL: Requested device 0000:3d:02.3 cannot be used 
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3d:02.4 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3d:02.5 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3d:02.6 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3d:02.7 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:01.0 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:01.1 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:01.2 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:01.3 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:01.4 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:01.5 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:01.6 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:01.7 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:02.0 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:02.1 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:02.2 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:02.3 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:02.4 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:02.5 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:02.6 cannot be used
00:12:32.848 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:32.848 EAL: Requested device 0000:3f:02.7 cannot be used
00:12:33.107 [2024-07-24 16:29:29.765655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:33.366 [2024-07-24 16:29:30.051687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:33.366 [2024-07-24 16:29:30.051781] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:12:33.366 [2024-07-24 16:29:30.051809] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:12:33.366 [2024-07-24 16:29:30.051824] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:33.933
00:12:33.933 real	0m1.197s
00:12:33.933 user	0m0.917s
00:12:33.933 sys	0m0.273s
00:12:33.933 16:29:30 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:33.933 16:29:30 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:12:33.933 ************************************
00:12:33.933 END TEST bdev_json_nonarray
00:12:33.933 ************************************
00:12:33.933 16:29:30 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]]
00:12:33.933 16:29:30 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite ''
00:12:33.933 16:29:30 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:12:33.933 16:29:30 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:33.933 16:29:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:12:33.933 ************************************
00:12:33.933 START TEST bdev_qos
00:12:33.933 ************************************
00:12:33.933 16:29:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1125 -- # qos_test_suite ''
00:12:33.933 16:29:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=1578488
00:12:33.933 16:29:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 1578488'
00:12:33.933 Process qos testing pid: 1578488
00:12:33.933 16:29:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:12:33.933 16:29:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 1578488
00:12:33.933 16:29:30 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # '[' -z 1578488 ']'
00:12:33.933 16:29:30 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:33.933 16:29:30 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:33.933 16:29:30 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:33.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:33.933 16:29:30 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:33.933 16:29:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:12:33.933 16:29:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:34.192 [2024-07-24 16:29:30.806720] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:12:34.192 [2024-07-24 16:29:30.806842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578488 ]
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:01.0 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:01.1 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:01.2 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:01.3 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:01.4 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:01.5 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:01.6 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:01.7 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:02.0 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:02.1 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:02.2 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:02.3 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:02.4 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:02.5 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:02.6 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3d:02.7 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:01.0 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:01.1 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:01.2 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:01.3 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:01.4 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:01.5 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:01.6 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:01.7 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:02.0 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:02.1 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:02.2 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:02.3 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:02.4 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:02.5 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:02.6 cannot be used
00:12:34.192 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:34.192 EAL: Requested device 0000:3f:02.7 cannot be used
00:12:34.192 [2024-07-24 16:29:31.019586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:34.449 [2024-07-24 16:29:31.293064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:35.014 16:29:31 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:35.014 16:29:31 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # return 0
00:12:35.014 16:29:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:12:35.014 16:29:31 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.014 16:29:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:35.272 Malloc_0
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_0
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:35.272 [
00:12:35.272 {
00:12:35.272 "name": "Malloc_0",
00:12:35.272 "aliases": [
00:12:35.272 "9b545ec4-1422-407b-98de-893190ced637"
00:12:35.272 ],
00:12:35.272 "product_name": "Malloc disk",
00:12:35.272 "block_size": 512,
00:12:35.272 "num_blocks": 262144,
00:12:35.272 "uuid": "9b545ec4-1422-407b-98de-893190ced637",
00:12:35.272 "assigned_rate_limits": {
00:12:35.272 "rw_ios_per_sec": 0,
00:12:35.272 "rw_mbytes_per_sec": 0,
00:12:35.272 "r_mbytes_per_sec": 0,
00:12:35.272 "w_mbytes_per_sec": 0
00:12:35.272 },
00:12:35.272 "claimed": false,
00:12:35.272 "zoned": false,
00:12:35.272 "supported_io_types": {
00:12:35.272 "read": true,
00:12:35.272 "write": true,
00:12:35.272 "unmap": true,
00:12:35.272 "flush": true,
00:12:35.272 "reset": true,
00:12:35.272 "nvme_admin": false,
00:12:35.272 "nvme_io": false,
00:12:35.272 "nvme_io_md": false,
00:12:35.272 "write_zeroes": true,
00:12:35.272 "zcopy": true,
00:12:35.272 "get_zone_info": false,
00:12:35.272 "zone_management": false,
00:12:35.272 "zone_append": false,
00:12:35.272 "compare": false,
00:12:35.272 "compare_and_write": false,
00:12:35.272 "abort": true,
00:12:35.272 "seek_hole": false,
00:12:35.272 "seek_data": false,
00:12:35.272 "copy": true,
00:12:35.272 "nvme_iov_md": false
00:12:35.272 },
00:12:35.272 "memory_domains": [
00:12:35.272 {
00:12:35.272 "dma_device_id": "system",
00:12:35.272 "dma_device_type": 1
00:12:35.272 },
00:12:35.272 {
00:12:35.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:35.272 "dma_device_type": 2
00:12:35.272 }
00:12:35.272 ],
00:12:35.272 "driver_specific": {}
00:12:35.272 }
00:12:35.272 ]
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:35.272 Null_1
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Null_1
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:35.272 [
00:12:35.272 {
00:12:35.272 "name": "Null_1",
00:12:35.272 "aliases": [
00:12:35.272 "e2513d5f-b3d4-4634-a7c1-072b8a8777ef"
00:12:35.272 ],
00:12:35.272 "product_name": "Null disk",
00:12:35.272 "block_size": 512,
00:12:35.272 "num_blocks": 262144,
00:12:35.272 "uuid": "e2513d5f-b3d4-4634-a7c1-072b8a8777ef",
00:12:35.272 "assigned_rate_limits": {
00:12:35.272 "rw_ios_per_sec": 0,
00:12:35.272 "rw_mbytes_per_sec": 0,
00:12:35.272 "r_mbytes_per_sec": 0,
00:12:35.272 "w_mbytes_per_sec": 0
00:12:35.272 },
00:12:35.272 "claimed": false,
00:12:35.272 "zoned": false,
00:12:35.272 "supported_io_types": {
00:12:35.272 "read": true,
00:12:35.272 "write": true,
00:12:35.272 "unmap": false,
00:12:35.272 "flush": false,
00:12:35.272 "reset": true,
00:12:35.272 "nvme_admin": false,
00:12:35.272 "nvme_io": false,
00:12:35.272 "nvme_io_md": false,
00:12:35.272 "write_zeroes": true,
00:12:35.272 "zcopy": false,
00:12:35.272 "get_zone_info": false,
00:12:35.272 "zone_management": false,
00:12:35.272 "zone_append": false,
00:12:35.272 "compare": false,
00:12:35.272 "compare_and_write": false,
00:12:35.272 "abort": true,
00:12:35.272 "seek_hole": false,
00:12:35.272 "seek_data": false,
00:12:35.272 "copy": false,
00:12:35.272 "nvme_iov_md": false
00:12:35.272 },
00:12:35.272 "driver_specific": {}
00:12:35.272 }
00:12:35.272 ]
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result
00:12:35.272 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:12:35.273 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:12:35.273 16:29:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1
00:12:35.530 Running I/O for 60 seconds...
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 61378.56 245514.26 0.00 0.00 246784.00 0.00 0.00 '
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']'
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}'
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=61378.56
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 61378
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=61378
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=15000
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 15000 -gt 1000 ']'
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 15000 Malloc_0
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 15000 IOPS Malloc_0
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:40.794 16:29:37 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:40.794 ************************************
00:12:40.794 START TEST bdev_qos_iops
00:12:40.794 ************************************
00:12:40.794 16:29:37 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1125 -- # run_qos_test 15000 IOPS Malloc_0
00:12:40.794 16:29:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=15000
00:12:40.794 16:29:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0
00:12:40.794 16:29:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0
00:12:40.794 16:29:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS
00:12:40.794 16:29:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:12:40.794 16:29:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result
00:12:40.794 16:29:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:12:40.794 16:29:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:12:40.794 16:29:37 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1
00:12:46.111 16:29:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 15000.20 60000.80 0.00 0.00 61140.00 0.00 0.00 '
00:12:46.111 16:29:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']'
00:12:46.111 16:29:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}'
00:12:46.111 16:29:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=15000.20
00:12:46.111 16:29:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 15000
00:12:46.111 16:29:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=15000
00:12:46.111 16:29:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']'
00:12:46.111 16:29:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=13500
00:12:46.111 16:29:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=16500
00:12:46.111 16:29:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 15000 -lt 13500 ']'
00:12:46.111 16:29:42 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 15000 -gt 16500 ']'
00:12:46.111
00:12:46.111 real	0m5.234s
00:12:46.111 user	0m0.106s
00:12:46.111 sys	0m0.046s
00:12:46.111 16:29:42 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:46.111 16:29:42 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x
00:12:46.111 ************************************
00:12:46.111 END TEST bdev_qos_iops
00:12:46.111 ************************************
00:12:46.111 16:29:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1
00:12:46.111 16:29:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:12:46.111 16:29:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1
00:12:46.111 16:29:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result
00:12:46.111 16:29:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:12:46.111 16:29:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1
00:12:46.111 16:29:42 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 21897.24 87588.96 0.00 0.00 89088.00 0.00 0.00 '
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=89088.00
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 89088
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=89088
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=8
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 8 -lt 2 ']'
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 8 Null_1
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 8 BANDWIDTH Null_1
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:51.425 16:29:47 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:51.425 ************************************
00:12:51.425 START TEST bdev_qos_bw
00:12:51.425 ************************************
00:12:51.425 16:29:47 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1125 -- # run_qos_test 8 BANDWIDTH Null_1
00:12:51.425 16:29:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=8
00:12:51.425 16:29:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0
00:12:51.425 16:29:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1
00:12:51.425 16:29:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:12:51.425 16:29:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1
00:12:51.425 16:29:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result
00:12:51.425 16:29:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:12:51.425 16:29:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1
00:12:51.425 16:29:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 2046.73 8186.90 0.00 0.00 8436.00 0.00 0.00 '
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=8436.00
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 8436
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=8436
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=8192
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=7372
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=9011
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 8436 -lt 7372 ']'
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 8436 -gt 9011 ']'
00:12:56.713
00:12:56.713 real	0m5.283s
00:12:56.713 user	0m0.105s
00:12:56.713 sys	0m0.044s
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x
00:12:56.713 ************************************
00:12:56.713 END TEST bdev_qos_bw
00:12:56.713 ************************************
00:12:56.713 16:29:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
00:12:56.713 16:29:53 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:56.713 16:29:53 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:56.713 16:29:53 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:56.713 16:29:53 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
00:12:56.713 16:29:53 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:12:56.713 16:29:53 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:56.713 16:29:53 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:56.713 ************************************
00:12:56.713 START TEST bdev_qos_ro_bw
00:12:56.713 ************************************
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1125 -- # run_qos_test 2 BANDWIDTH Malloc_0
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:12:56.713 16:29:53 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 512.30 2049.21 0.00 0.00 2056.00 0.00 0.00 '
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2056.00
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2056
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2056
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2056 -lt 1843 ']'
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2056 -gt 2252 ']'
00:13:01.981
00:13:01.981 real	0m5.166s
00:13:01.981 user	0m0.103s
00:13:01.981 sys	0m0.046s
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:01.981 16:29:58 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x
00:13:01.981 ************************************
00:13:01.981 END TEST bdev_qos_ro_bw
00:13:01.981 ************************************
00:13:01.981 16:29:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:13:01.981 16:29:58 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:01.981 16:29:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:02.549 16:29:59 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:02.549 16:29:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1
00:13:02.549 16:29:59 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:02.549 16:29:59 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:02.549
00:13:02.549 Latency(us)
00:13:02.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:02.549 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:02.549 Malloc_0 : 26.73 20653.03 80.68 0.00 0.00 12274.66 2188.90 503316.48
00:13:02.549 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:02.549 Null_1 : 27.01 20900.79 81.64 0.00 0.00 12214.48 786.43 278501.79
=================================================================================================================== 00:13:02.549 Total : 41553.82 162.32 0.00 0.00 12244.24 786.43 503316.48 00:13:02.549 0 00:13:02.549 16:29:59 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.549 16:29:59 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 1578488 00:13:02.549 16:29:59 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # '[' -z 1578488 ']' 00:13:02.549 16:29:59 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # kill -0 1578488 00:13:02.549 16:29:59 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # uname 00:13:02.549 16:29:59 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:02.549 16:29:59 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1578488 00:13:02.549 16:29:59 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:02.549 16:29:59 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:02.549 16:29:59 blockdev_general.bdev_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1578488' 00:13:02.549 killing process with pid 1578488 00:13:02.549 16:29:59 blockdev_general.bdev_qos -- common/autotest_common.sh@969 -- # kill 1578488 00:13:02.549 Received shutdown signal, test time was about 27.075953 seconds 00:13:02.549 00:13:02.549 Latency(us) 00:13:02.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.549 =================================================================================================================== 00:13:02.549 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:02.549 16:29:59 blockdev_general.bdev_qos -- common/autotest_common.sh@974 -- # wait 1578488 00:13:04.452 16:30:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM 
EXIT 00:13:04.452 00:13:04.452 real 0m30.294s 00:13:04.452 user 0m30.761s 00:13:04.452 sys 0m0.951s 00:13:04.452 16:30:00 blockdev_general.bdev_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:04.452 16:30:00 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:04.452 ************************************ 00:13:04.452 END TEST bdev_qos 00:13:04.452 ************************************ 00:13:04.452 16:30:01 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:04.452 16:30:01 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:04.452 16:30:01 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.452 16:30:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:04.452 ************************************ 00:13:04.452 START TEST bdev_qd_sampling 00:13:04.452 ************************************ 00:13:04.452 16:30:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1125 -- # qd_sampling_test_suite '' 00:13:04.452 16:30:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:13:04.452 16:30:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=1583795 00:13:04.452 16:30:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 1583795' 00:13:04.452 Process bdev QD sampling period testing pid: 1583795 00:13:04.452 16:30:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:04.452 16:30:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 1583795 00:13:04.452 16:30:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # '[' -z 1583795 ']' 00:13:04.452 16:30:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:04.452 16:30:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.452 16:30:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:04.452 16:30:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.452 16:30:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:04.452 16:30:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:04.452 [2024-07-24 16:30:01.162288] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:13:04.452 [2024-07-24 16:30:01.162415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583795 ] 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:04.452 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:04.452 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:04.452 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.452 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:04.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.453 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:04.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.453 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:04.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.453 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:04.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.453 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:04.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.453 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:04.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.453 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:04.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.453 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:04.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.453 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:04.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.453 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:04.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.453 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:04.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.453 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:04.453 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:04.453 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:04.711 [2024-07-24 16:30:01.385430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:04.969 [2024-07-24 16:30:01.666994] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:13:04.969 [2024-07-24 16:30:01.667000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # return 0 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:05.536 Malloc_QD 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_QD 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # local i 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.536 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:05.795 [ 00:13:05.795 { 00:13:05.795 "name": "Malloc_QD", 00:13:05.795 "aliases": [ 00:13:05.796 "c1f8c325-25d2-4dde-92d8-941dd1d7f12b" 00:13:05.796 ], 00:13:05.796 "product_name": "Malloc disk", 00:13:05.796 "block_size": 512, 00:13:05.796 "num_blocks": 262144, 00:13:05.796 "uuid": "c1f8c325-25d2-4dde-92d8-941dd1d7f12b", 00:13:05.796 "assigned_rate_limits": { 00:13:05.796 "rw_ios_per_sec": 0, 00:13:05.796 "rw_mbytes_per_sec": 0, 00:13:05.796 "r_mbytes_per_sec": 0, 00:13:05.796 "w_mbytes_per_sec": 0 00:13:05.796 }, 00:13:05.796 "claimed": false, 00:13:05.796 "zoned": false, 00:13:05.796 "supported_io_types": { 00:13:05.796 "read": true, 00:13:05.796 "write": true, 00:13:05.796 "unmap": true, 00:13:05.796 "flush": true, 00:13:05.796 "reset": true, 00:13:05.796 "nvme_admin": false, 00:13:05.796 "nvme_io": false, 00:13:05.796 "nvme_io_md": false, 00:13:05.796 "write_zeroes": true, 00:13:05.796 "zcopy": true, 00:13:05.796 "get_zone_info": false, 00:13:05.796 "zone_management": false, 00:13:05.796 "zone_append": false, 00:13:05.796 "compare": false, 00:13:05.796 "compare_and_write": false, 00:13:05.796 "abort": true, 00:13:05.796 "seek_hole": false, 00:13:05.796 "seek_data": false, 00:13:05.796 "copy": true, 00:13:05.796 "nvme_iov_md": false 00:13:05.796 }, 00:13:05.796 "memory_domains": [ 00:13:05.796 { 00:13:05.796 "dma_device_id": "system", 00:13:05.796 "dma_device_type": 1 00:13:05.796 }, 00:13:05.796 { 00:13:05.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.796 "dma_device_type": 2 00:13:05.796 } 00:13:05.796 ], 00:13:05.796 "driver_specific": {} 00:13:05.796 } 00:13:05.796 ] 00:13:05.796 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:05.796 16:30:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@907 -- # return 0 00:13:05.796 16:30:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:13:05.796 16:30:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:05.796 Running I/O for 5 seconds... 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:13:07.697 "tick_rate": 2500000000, 00:13:07.697 "ticks": 14078292772332926, 00:13:07.697 "bdevs": [ 00:13:07.697 { 00:13:07.697 "name": 
"Malloc_QD", 00:13:07.697 "bytes_read": 700494336, 00:13:07.697 "num_read_ops": 171012, 00:13:07.697 "bytes_written": 0, 00:13:07.697 "num_write_ops": 0, 00:13:07.697 "bytes_unmapped": 0, 00:13:07.697 "num_unmap_ops": 0, 00:13:07.697 "bytes_copied": 0, 00:13:07.697 "num_copy_ops": 0, 00:13:07.697 "read_latency_ticks": 2279992412536, 00:13:07.697 "max_read_latency_ticks": 14373704, 00:13:07.697 "min_read_latency_ticks": 477186, 00:13:07.697 "write_latency_ticks": 0, 00:13:07.697 "max_write_latency_ticks": 0, 00:13:07.697 "min_write_latency_ticks": 0, 00:13:07.697 "unmap_latency_ticks": 0, 00:13:07.697 "max_unmap_latency_ticks": 0, 00:13:07.697 "min_unmap_latency_ticks": 0, 00:13:07.697 "copy_latency_ticks": 0, 00:13:07.697 "max_copy_latency_ticks": 0, 00:13:07.697 "min_copy_latency_ticks": 0, 00:13:07.697 "io_error": {}, 00:13:07.697 "queue_depth_polling_period": 10, 00:13:07.697 "queue_depth": 512, 00:13:07.697 "io_time": 20, 00:13:07.697 "weighted_io_time": 10240 00:13:07.697 } 00:13:07.697 ] 00:13:07.697 }' 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.697 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:07.697 00:13:07.697 Latency(us) 00:13:07.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.697 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, 
IO size: 4096) 00:13:07.697 Malloc_QD : 1.85 47765.35 186.58 0.00 0.00 5344.94 1304.17 5767.17 00:13:07.697 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:07.697 Malloc_QD : 1.85 48115.86 187.95 0.00 0.00 5306.49 969.93 5531.24 00:13:07.697 =================================================================================================================== 00:13:07.697 Total : 95881.21 374.54 0.00 0.00 5325.63 969.93 5767.17 00:13:07.985 0 00:13:07.985 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.985 16:30:04 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 1583795 00:13:07.985 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # '[' -z 1583795 ']' 00:13:07.985 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # kill -0 1583795 00:13:07.985 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # uname 00:13:07.985 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:07.985 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1583795 00:13:07.985 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:07.985 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:07.985 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1583795' 00:13:07.985 killing process with pid 1583795 00:13:07.985 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@969 -- # kill 1583795 00:13:07.985 Received shutdown signal, test time was about 2.075022 seconds 00:13:07.985 00:13:07.985 Latency(us) 00:13:07.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.985 
=================================================================================================================== 00:13:07.985 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:07.985 16:30:04 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@974 -- # wait 1583795 00:13:09.890 16:30:06 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:13:09.890 00:13:09.890 real 0m5.398s 00:13:09.890 user 0m9.776s 00:13:09.890 sys 0m0.553s 00:13:09.890 16:30:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.890 16:30:06 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:09.890 ************************************ 00:13:09.890 END TEST bdev_qd_sampling 00:13:09.890 ************************************ 00:13:09.890 16:30:06 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:13:09.890 16:30:06 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:09.890 16:30:06 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.890 16:30:06 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:09.890 ************************************ 00:13:09.891 START TEST bdev_error 00:13:09.891 ************************************ 00:13:09.891 16:30:06 blockdev_general.bdev_error -- common/autotest_common.sh@1125 -- # error_test_suite '' 00:13:09.891 16:30:06 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:13:09.891 16:30:06 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:13:09.891 16:30:06 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:13:09.891 16:30:06 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=1584801 00:13:09.891 16:30:06 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 1584801' 00:13:09.891 Process error testing pid: 1584801 
00:13:09.891 16:30:06 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:09.891 16:30:06 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 1584801 00:13:09.891 16:30:06 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 1584801 ']' 00:13:09.891 16:30:06 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.891 16:30:06 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.891 16:30:06 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.891 16:30:06 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.891 16:30:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:09.891 [2024-07-24 16:30:06.662619] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:13:09.891 [2024-07-24 16:30:06.662743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1584801 ] 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:02.3 cannot be used 
00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:10.150 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.150 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:10.150 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.151 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:10.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.151 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:10.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.151 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:10.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.151 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:10.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:10.151 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:10.151 [2024-07-24 16:30:06.876855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.409 [2024-07-24 16:30:07.142434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.976 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.976 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:13:10.976 16:30:07 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:10.976 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.976 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.236 Dev_1 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.236 16:30:07 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.236 [ 00:13:11.236 { 00:13:11.236 "name": "Dev_1", 00:13:11.236 "aliases": [ 00:13:11.236 "bf40b612-658e-402a-b310-46b5f4310c5f" 00:13:11.236 ], 00:13:11.236 "product_name": "Malloc disk", 00:13:11.236 "block_size": 512, 00:13:11.236 "num_blocks": 262144, 00:13:11.236 "uuid": "bf40b612-658e-402a-b310-46b5f4310c5f", 00:13:11.236 "assigned_rate_limits": { 00:13:11.236 "rw_ios_per_sec": 0, 00:13:11.236 "rw_mbytes_per_sec": 0, 00:13:11.236 "r_mbytes_per_sec": 0, 00:13:11.236 "w_mbytes_per_sec": 0 00:13:11.236 }, 00:13:11.236 "claimed": false, 00:13:11.236 "zoned": false, 00:13:11.236 "supported_io_types": { 00:13:11.236 "read": true, 00:13:11.236 "write": true, 00:13:11.236 "unmap": true, 00:13:11.236 "flush": true, 00:13:11.236 "reset": true, 00:13:11.236 "nvme_admin": false, 00:13:11.236 "nvme_io": false, 00:13:11.236 "nvme_io_md": false, 00:13:11.236 "write_zeroes": true, 00:13:11.236 "zcopy": true, 00:13:11.236 "get_zone_info": 
false, 00:13:11.236 "zone_management": false, 00:13:11.236 "zone_append": false, 00:13:11.236 "compare": false, 00:13:11.236 "compare_and_write": false, 00:13:11.236 "abort": true, 00:13:11.236 "seek_hole": false, 00:13:11.236 "seek_data": false, 00:13:11.236 "copy": true, 00:13:11.236 "nvme_iov_md": false 00:13:11.236 }, 00:13:11.236 "memory_domains": [ 00:13:11.236 { 00:13:11.236 "dma_device_id": "system", 00:13:11.236 "dma_device_type": 1 00:13:11.236 }, 00:13:11.236 { 00:13:11.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.236 "dma_device_type": 2 00:13:11.236 } 00:13:11.236 ], 00:13:11.236 "driver_specific": {} 00:13:11.236 } 00:13:11.236 ] 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:13:11.236 16:30:07 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.236 true 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.236 16:30:07 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.236 16:30:07 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.495 Dev_2 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.496 16:30:08 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:13:11.496 16:30:08 blockdev_general.bdev_error -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.496 [ 00:13:11.496 { 00:13:11.496 "name": "Dev_2", 00:13:11.496 "aliases": [ 00:13:11.496 "c0eff3b5-aaf6-45ed-8704-faa883d4756f" 00:13:11.496 ], 00:13:11.496 "product_name": "Malloc disk", 00:13:11.496 "block_size": 512, 00:13:11.496 "num_blocks": 262144, 00:13:11.496 "uuid": "c0eff3b5-aaf6-45ed-8704-faa883d4756f", 00:13:11.496 "assigned_rate_limits": { 00:13:11.496 "rw_ios_per_sec": 0, 00:13:11.496 "rw_mbytes_per_sec": 0, 00:13:11.496 "r_mbytes_per_sec": 0, 00:13:11.496 "w_mbytes_per_sec": 0 00:13:11.496 }, 00:13:11.496 "claimed": false, 00:13:11.496 "zoned": false, 00:13:11.496 "supported_io_types": { 00:13:11.496 "read": true, 00:13:11.496 "write": true, 00:13:11.496 "unmap": true, 00:13:11.496 "flush": true, 00:13:11.496 "reset": true, 00:13:11.496 "nvme_admin": false, 00:13:11.496 "nvme_io": false, 00:13:11.496 "nvme_io_md": false, 00:13:11.496 "write_zeroes": true, 
00:13:11.496 "zcopy": true, 00:13:11.496 "get_zone_info": false, 00:13:11.496 "zone_management": false, 00:13:11.496 "zone_append": false, 00:13:11.496 "compare": false, 00:13:11.496 "compare_and_write": false, 00:13:11.496 "abort": true, 00:13:11.496 "seek_hole": false, 00:13:11.496 "seek_data": false, 00:13:11.496 "copy": true, 00:13:11.496 "nvme_iov_md": false 00:13:11.496 }, 00:13:11.496 "memory_domains": [ 00:13:11.496 { 00:13:11.496 "dma_device_id": "system", 00:13:11.496 "dma_device_type": 1 00:13:11.496 }, 00:13:11.496 { 00:13:11.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.496 "dma_device_type": 2 00:13:11.496 } 00:13:11.496 ], 00:13:11.496 "driver_specific": {} 00:13:11.496 } 00:13:11.496 ] 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:13:11.496 16:30:08 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.496 16:30:08 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.496 16:30:08 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:13:11.496 16:30:08 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:11.496 Running I/O for 5 seconds... 00:13:12.429 16:30:09 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 1584801 00:13:12.429 16:30:09 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. Pid: 1584801' 00:13:12.429 Process is existed as continue on error is set. 
Pid: 1584801 00:13:12.429 16:30:09 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:12.429 16:30:09 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.429 16:30:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:12.429 16:30:09 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.429 16:30:09 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:12.429 16:30:09 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.429 16:30:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:12.429 Timeout while waiting for response: 00:13:12.429 00:13:12.429 00:13:12.687 16:30:09 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.687 16:30:09 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:13:16.872 00:13:16.872 Latency(us) 00:13:16.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.872 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:16.872 EE_Dev_1 : 0.91 36054.88 140.84 5.51 0.00 440.07 142.54 727.45 00:13:16.872 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:16.872 Dev_2 : 5.00 75380.48 294.46 0.00 0.00 208.79 70.04 157705.83 00:13:16.872 =================================================================================================================== 00:13:16.872 Total : 111435.36 435.29 5.51 0.00 227.25 70.04 157705.83 00:13:17.816 16:30:14 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 1584801 00:13:17.816 16:30:14 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # '[' -z 1584801 ']' 00:13:17.816 16:30:14 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # kill -0 1584801 00:13:17.816 16:30:14 
blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # uname 00:13:17.816 16:30:14 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:17.816 16:30:14 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1584801 00:13:17.816 16:30:14 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:17.816 16:30:14 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:17.816 16:30:14 blockdev_general.bdev_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1584801' 00:13:17.816 killing process with pid 1584801 00:13:17.816 16:30:14 blockdev_general.bdev_error -- common/autotest_common.sh@969 -- # kill 1584801 00:13:17.816 Received shutdown signal, test time was about 5.000000 seconds 00:13:17.816 00:13:17.816 Latency(us) 00:13:17.816 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.816 =================================================================================================================== 00:13:17.816 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:17.816 16:30:14 blockdev_general.bdev_error -- common/autotest_common.sh@974 -- # wait 1584801 00:13:20.348 16:30:16 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=1586882 00:13:20.348 16:30:16 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 1586882' 00:13:20.348 Process error testing pid: 1586882 00:13:20.348 16:30:16 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 1586882 00:13:20.348 16:30:16 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 1586882 ']' 00:13:20.348 16:30:16 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.348 16:30:16 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 
00:13:20.348 16:30:16 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.348 16:30:16 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.348 16:30:16 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:20.348 16:30:16 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:20.348 [2024-07-24 16:30:16.722023] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:13:20.348 [2024-07-24 16:30:16.722157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1586882 ] 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:13:20.348 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:20.348 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:20.348 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:20.348 [2024-07-24 16:30:16.934118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.607 [2024-07-24 16:30:17.211914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.866 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.866 16:30:17 blockdev_general.bdev_error -- 
common/autotest_common.sh@864 -- # return 0 00:13:20.866 16:30:17 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:20.866 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.866 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:21.125 Dev_1 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.125 16:30:17 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:21.125 [ 00:13:21.125 { 00:13:21.125 "name": "Dev_1", 00:13:21.125 "aliases": [ 00:13:21.125 "69c2000d-d59c-45b8-ae19-a7374559d1ff" 00:13:21.125 ], 
00:13:21.125 "product_name": "Malloc disk", 00:13:21.125 "block_size": 512, 00:13:21.125 "num_blocks": 262144, 00:13:21.125 "uuid": "69c2000d-d59c-45b8-ae19-a7374559d1ff", 00:13:21.125 "assigned_rate_limits": { 00:13:21.125 "rw_ios_per_sec": 0, 00:13:21.125 "rw_mbytes_per_sec": 0, 00:13:21.125 "r_mbytes_per_sec": 0, 00:13:21.125 "w_mbytes_per_sec": 0 00:13:21.125 }, 00:13:21.125 "claimed": false, 00:13:21.125 "zoned": false, 00:13:21.125 "supported_io_types": { 00:13:21.125 "read": true, 00:13:21.125 "write": true, 00:13:21.125 "unmap": true, 00:13:21.125 "flush": true, 00:13:21.125 "reset": true, 00:13:21.125 "nvme_admin": false, 00:13:21.125 "nvme_io": false, 00:13:21.125 "nvme_io_md": false, 00:13:21.125 "write_zeroes": true, 00:13:21.125 "zcopy": true, 00:13:21.125 "get_zone_info": false, 00:13:21.125 "zone_management": false, 00:13:21.125 "zone_append": false, 00:13:21.125 "compare": false, 00:13:21.125 "compare_and_write": false, 00:13:21.125 "abort": true, 00:13:21.125 "seek_hole": false, 00:13:21.125 "seek_data": false, 00:13:21.125 "copy": true, 00:13:21.125 "nvme_iov_md": false 00:13:21.125 }, 00:13:21.125 "memory_domains": [ 00:13:21.125 { 00:13:21.125 "dma_device_id": "system", 00:13:21.125 "dma_device_type": 1 00:13:21.125 }, 00:13:21.125 { 00:13:21.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.125 "dma_device_type": 2 00:13:21.125 } 00:13:21.125 ], 00:13:21.125 "driver_specific": {} 00:13:21.125 } 00:13:21.125 ] 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:13:21.125 16:30:17 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:21.125 true 00:13:21.125 
16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.125 16:30:17 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.125 16:30:17 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:21.384 Dev_2 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.384 16:30:18 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:21.384 [ 00:13:21.384 { 00:13:21.384 "name": "Dev_2", 00:13:21.384 "aliases": [ 00:13:21.384 
"fc29e5f7-8dc7-4a46-a224-67c12eaa3a4e" 00:13:21.384 ], 00:13:21.384 "product_name": "Malloc disk", 00:13:21.384 "block_size": 512, 00:13:21.384 "num_blocks": 262144, 00:13:21.384 "uuid": "fc29e5f7-8dc7-4a46-a224-67c12eaa3a4e", 00:13:21.384 "assigned_rate_limits": { 00:13:21.384 "rw_ios_per_sec": 0, 00:13:21.384 "rw_mbytes_per_sec": 0, 00:13:21.384 "r_mbytes_per_sec": 0, 00:13:21.384 "w_mbytes_per_sec": 0 00:13:21.384 }, 00:13:21.384 "claimed": false, 00:13:21.384 "zoned": false, 00:13:21.384 "supported_io_types": { 00:13:21.384 "read": true, 00:13:21.384 "write": true, 00:13:21.384 "unmap": true, 00:13:21.384 "flush": true, 00:13:21.384 "reset": true, 00:13:21.384 "nvme_admin": false, 00:13:21.384 "nvme_io": false, 00:13:21.384 "nvme_io_md": false, 00:13:21.384 "write_zeroes": true, 00:13:21.384 "zcopy": true, 00:13:21.384 "get_zone_info": false, 00:13:21.384 "zone_management": false, 00:13:21.384 "zone_append": false, 00:13:21.384 "compare": false, 00:13:21.384 "compare_and_write": false, 00:13:21.384 "abort": true, 00:13:21.384 "seek_hole": false, 00:13:21.384 "seek_data": false, 00:13:21.384 "copy": true, 00:13:21.384 "nvme_iov_md": false 00:13:21.384 }, 00:13:21.384 "memory_domains": [ 00:13:21.384 { 00:13:21.384 "dma_device_id": "system", 00:13:21.384 "dma_device_type": 1 00:13:21.384 }, 00:13:21.384 { 00:13:21.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.384 "dma_device_type": 2 00:13:21.384 } 00:13:21.384 ], 00:13:21.384 "driver_specific": {} 00:13:21.384 } 00:13:21.384 ] 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:13:21.384 16:30:18 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.384 16:30:18 
blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.384 16:30:18 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 1586882 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # local es=0 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1586882 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@638 -- # local arg=wait 00:13:21.384 16:30:18 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # type -t wait 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:21.384 16:30:18 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # wait 1586882 00:13:21.643 Running I/O for 5 seconds... 
00:13:21.643 task offset: 175064 on job bdev=EE_Dev_1 fails 00:13:21.643 00:13:21.643 Latency(us) 00:13:21.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:21.643 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:21.643 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:21.643 EE_Dev_1 : 0.00 27500.00 107.42 6250.00 0.00 387.41 140.90 694.68 00:13:21.643 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:21.643 Dev_2 : 0.00 17250.67 67.39 0.00 0.00 686.13 140.08 1271.40 00:13:21.643 =================================================================================================================== 00:13:21.643 Total : 44750.67 174.81 6250.00 0.00 549.43 140.08 1271.40 00:13:21.643 [2024-07-24 16:30:18.281160] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:21.643 request: 00:13:21.643 { 00:13:21.643 "method": "perform_tests", 00:13:21.643 "req_id": 1 00:13:21.643 } 00:13:21.643 Got JSON-RPC error response 00:13:21.643 response: 00:13:21.643 { 00:13:21.643 "code": -32603, 00:13:21.643 "message": "bdevperf failed with error Operation not permitted" 00:13:21.643 } 00:13:24.176 16:30:20 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # es=255 00:13:24.176 16:30:20 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:24.176 16:30:20 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # es=127 00:13:24.176 16:30:20 blockdev_general.bdev_error -- common/autotest_common.sh@663 -- # case "$es" in 00:13:24.176 16:30:20 blockdev_general.bdev_error -- common/autotest_common.sh@670 -- # es=1 00:13:24.176 16:30:20 blockdev_general.bdev_error -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:24.176 00:13:24.176 real 0m13.954s 00:13:24.176 user 0m13.882s 00:13:24.176 sys 0m1.102s 00:13:24.176 16:30:20 blockdev_general.bdev_error -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:13:24.176 16:30:20 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:24.176 ************************************ 00:13:24.176 END TEST bdev_error 00:13:24.176 ************************************ 00:13:24.176 16:30:20 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:13:24.176 16:30:20 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:24.176 16:30:20 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.176 16:30:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:24.176 ************************************ 00:13:24.176 START TEST bdev_stat 00:13:24.176 ************************************ 00:13:24.176 16:30:20 blockdev_general.bdev_stat -- common/autotest_common.sh@1125 -- # stat_test_suite '' 00:13:24.176 16:30:20 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:13:24.176 16:30:20 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=1587617 00:13:24.176 16:30:20 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 1587617' 00:13:24.176 Process Bdev IO statistics testing pid: 1587617 00:13:24.176 16:30:20 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:24.176 16:30:20 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:24.176 16:30:20 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 1587617 00:13:24.176 16:30:20 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # '[' -z 1587617 ']' 00:13:24.176 16:30:20 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.176 16:30:20 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:13:24.176 16:30:20 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.176 16:30:20 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:24.176 16:30:20 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:24.176 [2024-07-24 16:30:20.695146] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:13:24.176 [2024-07-24 16:30:20.695271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587617 ] 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: 
Requested device 0000:3d:01.7 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 
0000:3f:01.5 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.176 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:24.176 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.177 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:24.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.177 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:24.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.177 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:24.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.177 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:24.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.177 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:24.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.177 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:24.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.177 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:24.177 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:24.177 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:24.177 [2024-07-24 16:30:20.919928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:24.435 [2024-07-24 16:30:21.188375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.435 [2024-07-24 16:30:21.188381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.003 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:25.003 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # return 0 00:13:25.003 16:30:21 blockdev_general.bdev_stat 
-- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:25.003 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.003 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:25.262 Malloc_STAT 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_STAT 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # local i 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:25.262 [ 00:13:25.262 { 00:13:25.262 "name": "Malloc_STAT", 00:13:25.262 "aliases": [ 00:13:25.262 "d17efdce-1010-4bec-82b3-a80e248f1358" 00:13:25.262 ], 00:13:25.262 "product_name": "Malloc disk", 00:13:25.262 "block_size": 512, 
00:13:25.262 "num_blocks": 262144, 00:13:25.262 "uuid": "d17efdce-1010-4bec-82b3-a80e248f1358", 00:13:25.262 "assigned_rate_limits": { 00:13:25.262 "rw_ios_per_sec": 0, 00:13:25.262 "rw_mbytes_per_sec": 0, 00:13:25.262 "r_mbytes_per_sec": 0, 00:13:25.262 "w_mbytes_per_sec": 0 00:13:25.262 }, 00:13:25.262 "claimed": false, 00:13:25.262 "zoned": false, 00:13:25.262 "supported_io_types": { 00:13:25.262 "read": true, 00:13:25.262 "write": true, 00:13:25.262 "unmap": true, 00:13:25.262 "flush": true, 00:13:25.262 "reset": true, 00:13:25.262 "nvme_admin": false, 00:13:25.262 "nvme_io": false, 00:13:25.262 "nvme_io_md": false, 00:13:25.262 "write_zeroes": true, 00:13:25.262 "zcopy": true, 00:13:25.262 "get_zone_info": false, 00:13:25.262 "zone_management": false, 00:13:25.262 "zone_append": false, 00:13:25.262 "compare": false, 00:13:25.262 "compare_and_write": false, 00:13:25.262 "abort": true, 00:13:25.262 "seek_hole": false, 00:13:25.262 "seek_data": false, 00:13:25.262 "copy": true, 00:13:25.262 "nvme_iov_md": false 00:13:25.262 }, 00:13:25.262 "memory_domains": [ 00:13:25.262 { 00:13:25.262 "dma_device_id": "system", 00:13:25.262 "dma_device_type": 1 00:13:25.262 }, 00:13:25.262 { 00:13:25.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.262 "dma_device_type": 2 00:13:25.262 } 00:13:25.262 ], 00:13:25.262 "driver_specific": {} 00:13:25.262 } 00:13:25.262 ] 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- common/autotest_common.sh@907 -- # return 0 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:13:25.262 16:30:21 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:25.262 Running I/O for 10 seconds... 
00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:13:27.237 "tick_rate": 2500000000, 00:13:27.237 "ticks": 14078341375476518, 00:13:27.237 "bdevs": [ 00:13:27.237 { 00:13:27.237 "name": "Malloc_STAT", 00:13:27.237 "bytes_read": 741388800, 00:13:27.237 "num_read_ops": 180996, 00:13:27.237 "bytes_written": 0, 00:13:27.237 "num_write_ops": 0, 00:13:27.237 "bytes_unmapped": 0, 00:13:27.237 "num_unmap_ops": 0, 00:13:27.237 "bytes_copied": 0, 00:13:27.237 "num_copy_ops": 0, 00:13:27.237 "read_latency_ticks": 2418494529728, 00:13:27.237 "max_read_latency_ticks": 14123772, 00:13:27.237 "min_read_latency_ticks": 504042, 
00:13:27.237 "write_latency_ticks": 0, 00:13:27.237 "max_write_latency_ticks": 0, 00:13:27.237 "min_write_latency_ticks": 0, 00:13:27.237 "unmap_latency_ticks": 0, 00:13:27.237 "max_unmap_latency_ticks": 0, 00:13:27.237 "min_unmap_latency_ticks": 0, 00:13:27.237 "copy_latency_ticks": 0, 00:13:27.237 "max_copy_latency_ticks": 0, 00:13:27.237 "min_copy_latency_ticks": 0, 00:13:27.237 "io_error": {} 00:13:27.237 } 00:13:27.237 ] 00:13:27.237 }' 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=180996 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.237 16:30:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:13:27.237 "tick_rate": 2500000000, 00:13:27.237 "ticks": 14078341538655188, 00:13:27.237 "name": "Malloc_STAT", 00:13:27.237 "channels": [ 00:13:27.237 { 00:13:27.237 "thread_id": 2, 00:13:27.237 "bytes_read": 381681664, 00:13:27.237 "num_read_ops": 93184, 00:13:27.237 "bytes_written": 0, 00:13:27.237 "num_write_ops": 0, 00:13:27.237 "bytes_unmapped": 0, 00:13:27.237 "num_unmap_ops": 0, 00:13:27.237 "bytes_copied": 0, 00:13:27.237 "num_copy_ops": 0, 00:13:27.237 "read_latency_ticks": 1249708222696, 00:13:27.237 "max_read_latency_ticks": 14513578, 00:13:27.237 "min_read_latency_ticks": 10151206, 00:13:27.237 "write_latency_ticks": 0, 00:13:27.237 "max_write_latency_ticks": 0, 00:13:27.237 "min_write_latency_ticks": 0, 00:13:27.237 "unmap_latency_ticks": 0, 00:13:27.237 "max_unmap_latency_ticks": 0, 00:13:27.237 
"min_unmap_latency_ticks": 0, 00:13:27.237 "copy_latency_ticks": 0, 00:13:27.237 "max_copy_latency_ticks": 0, 00:13:27.237 "min_copy_latency_ticks": 0 00:13:27.237 }, 00:13:27.237 { 00:13:27.237 "thread_id": 3, 00:13:27.237 "bytes_read": 384827392, 00:13:27.237 "num_read_ops": 93952, 00:13:27.237 "bytes_written": 0, 00:13:27.237 "num_write_ops": 0, 00:13:27.237 "bytes_unmapped": 0, 00:13:27.237 "num_unmap_ops": 0, 00:13:27.237 "bytes_copied": 0, 00:13:27.237 "num_copy_ops": 0, 00:13:27.237 "read_latency_ticks": 1251328720226, 00:13:27.237 "max_read_latency_ticks": 13789240, 00:13:27.237 "min_read_latency_ticks": 10172682, 00:13:27.237 "write_latency_ticks": 0, 00:13:27.237 "max_write_latency_ticks": 0, 00:13:27.237 "min_write_latency_ticks": 0, 00:13:27.237 "unmap_latency_ticks": 0, 00:13:27.237 "max_unmap_latency_ticks": 0, 00:13:27.237 "min_unmap_latency_ticks": 0, 00:13:27.237 "copy_latency_ticks": 0, 00:13:27.237 "max_copy_latency_ticks": 0, 00:13:27.237 "min_copy_latency_ticks": 0 00:13:27.237 } 00:13:27.237 ] 00:13:27.237 }' 00:13:27.237 16:30:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:13:27.237 16:30:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=93184 00:13:27.237 16:30:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=93184 00:13:27.237 16:30:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:13:27.237 16:30:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=93952 00:13:27.237 16:30:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=187136 00:13:27.237 16:30:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:27.237 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.237 16:30:24 blockdev_general.bdev_stat -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.237 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.237 16:30:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:13:27.237 "tick_rate": 2500000000, 00:13:27.237 "ticks": 14078341807203778, 00:13:27.237 "bdevs": [ 00:13:27.237 { 00:13:27.237 "name": "Malloc_STAT", 00:13:27.237 "bytes_read": 808497664, 00:13:27.237 "num_read_ops": 197380, 00:13:27.237 "bytes_written": 0, 00:13:27.237 "num_write_ops": 0, 00:13:27.237 "bytes_unmapped": 0, 00:13:27.237 "num_unmap_ops": 0, 00:13:27.237 "bytes_copied": 0, 00:13:27.237 "num_copy_ops": 0, 00:13:27.237 "read_latency_ticks": 2637691845836, 00:13:27.237 "max_read_latency_ticks": 14760808, 00:13:27.237 "min_read_latency_ticks": 504042, 00:13:27.237 "write_latency_ticks": 0, 00:13:27.237 "max_write_latency_ticks": 0, 00:13:27.237 "min_write_latency_ticks": 0, 00:13:27.237 "unmap_latency_ticks": 0, 00:13:27.237 "max_unmap_latency_ticks": 0, 00:13:27.237 "min_unmap_latency_ticks": 0, 00:13:27.237 "copy_latency_ticks": 0, 00:13:27.237 "max_copy_latency_ticks": 0, 00:13:27.237 "min_copy_latency_ticks": 0, 00:13:27.237 "io_error": {} 00:13:27.237 } 00:13:27.237 ] 00:13:27.237 }' 00:13:27.496 16:30:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:13:27.496 16:30:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=197380 00:13:27.496 16:30:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 187136 -lt 180996 ']' 00:13:27.496 16:30:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 187136 -gt 197380 ']' 00:13:27.496 16:30:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:27.496 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.496 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:27.496 00:13:27.496 
Latency(us) 00:13:27.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.496 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:27.496 Malloc_STAT : 2.13 47619.45 186.01 0.00 0.00 5361.37 1821.90 5924.45 00:13:27.496 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:27.496 Malloc_STAT : 2.13 47977.52 187.41 0.00 0.00 5321.37 1795.69 5531.24 00:13:27.496 =================================================================================================================== 00:13:27.496 Total : 95596.97 373.43 0.00 0.00 5341.29 1795.69 5924.45 00:13:27.496 0 00:13:27.496 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.496 16:30:24 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 1587617 00:13:27.496 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # '[' -z 1587617 ']' 00:13:27.496 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # kill -0 1587617 00:13:27.496 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # uname 00:13:27.496 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:27.496 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1587617 00:13:27.755 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:27.755 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:27.755 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1587617' 00:13:27.755 killing process with pid 1587617 00:13:27.755 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@969 -- # kill 1587617 00:13:27.755 Received shutdown signal, test time was about 2.361556 seconds 00:13:27.755 00:13:27.755 Latency(us) 
00:13:27.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.755 =================================================================================================================== 00:13:27.755 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:27.755 16:30:24 blockdev_general.bdev_stat -- common/autotest_common.sh@974 -- # wait 1587617 00:13:29.659 16:30:26 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:13:29.659 00:13:29.659 real 0m5.529s 00:13:29.659 user 0m10.089s 00:13:29.659 sys 0m0.590s 00:13:29.659 16:30:26 blockdev_general.bdev_stat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.660 16:30:26 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:29.660 ************************************ 00:13:29.660 END TEST bdev_stat 00:13:29.660 ************************************ 00:13:29.660 16:30:26 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:13:29.660 16:30:26 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:13:29.660 16:30:26 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:29.660 16:30:26 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:13:29.660 16:30:26 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:13:29.660 16:30:26 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:13:29.660 16:30:26 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:13:29.660 16:30:26 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:13:29.660 16:30:26 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:13:29.660 16:30:26 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:13:29.660 00:13:29.660 real 2m45.724s 00:13:29.660 user 8m30.010s 00:13:29.660 sys 0m26.160s 00:13:29.660 16:30:26 
blockdev_general -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.660 16:30:26 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:29.660 ************************************ 00:13:29.660 END TEST blockdev_general 00:13:29.660 ************************************ 00:13:29.660 16:30:26 -- spdk/autotest.sh@194 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:13:29.660 16:30:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:29.660 16:30:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.660 16:30:26 -- common/autotest_common.sh@10 -- # set +x 00:13:29.660 ************************************ 00:13:29.660 START TEST bdev_raid 00:13:29.660 ************************************ 00:13:29.660 16:30:26 bdev_raid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:13:29.660 * Looking for test storage... 00:13:29.660 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:13:29.660 16:30:26 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:13:29.660 16:30:26 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:13:29.660 16:30:26 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:29.660 16:30:26 bdev_raid -- bdev/bdev_raid.sh@927 -- # mkdir -p /raidtest 00:13:29.660 16:30:26 bdev_raid -- bdev/bdev_raid.sh@928 -- # trap 'cleanup; exit 1' EXIT 00:13:29.660 16:30:26 bdev_raid -- bdev/bdev_raid.sh@930 -- # base_blocklen=512 00:13:29.660 16:30:26 bdev_raid -- bdev/bdev_raid.sh@932 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:13:29.660 16:30:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:29.660 16:30:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:13:29.660 16:30:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:29.660 ************************************ 00:13:29.660 START TEST raid0_resize_superblock_test 00:13:29.660 ************************************ 00:13:29.660 16:30:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:13:29.660 16:30:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=0 00:13:29.660 16:30:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=1588740 00:13:29.660 16:30:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 1588740' 00:13:29.660 Process raid pid: 1588740 00:13:29.660 16:30:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:29.660 16:30:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 1588740 /var/tmp/spdk-raid.sock 00:13:29.660 16:30:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 1588740 ']' 00:13:29.660 16:30:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:29.660 16:30:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:29.660 16:30:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:29.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:29.660 16:30:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:29.660 16:30:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.919 [2024-07-24 16:30:26.631832] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:13:29.919 [2024-07-24 16:30:26.632083] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 
0000:3d:02.1 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:30.178 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.178 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:30.179 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.179 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:30.179 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.179 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:30.179 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.179 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:30.179 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.179 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:30.179 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.179 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:30.179 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:30.179 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:30.179 [2024-07-24 16:30:27.012338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.746 [2024-07-24 16:30:27.304101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.005 [2024-07-24 16:30:27.664542] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.005 [2024-07-24 16:30:27.664579] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.005 16:30:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:31.005 16:30:27 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:31.005 16:30:27 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b 
malloc0 512 512 00:13:32.379 malloc0 00:13:32.379 16:30:28 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:13:32.379 [2024-07-24 16:30:29.093336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:32.379 [2024-07-24 16:30:29.093403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.379 [2024-07-24 16:30:29.093437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:13:32.379 [2024-07-24 16:30:29.093457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.379 [2024-07-24 16:30:29.096225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.379 [2024-07-24 16:30:29.096275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:32.379 pt0 00:13:32.380 16:30:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:13:32.638 704dc748-f254-4197-ad3d-0c0119569d47 00:13:32.638 16:30:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:13:32.896 cb926e33-b61c-47f6-9bbe-4109190b6c9c 00:13:32.896 16:30:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:13:33.155 569c03b4-5cfb-448f-97dd-eb21c679da0e 00:13:33.155 16:30:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:13:33.155 16:30:29 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@884 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:13:33.414 [2024-07-24 16:30:30.092356] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev cb926e33-b61c-47f6-9bbe-4109190b6c9c is claimed 00:13:33.414 [2024-07-24 16:30:30.092503] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev 569c03b4-5cfb-448f-97dd-eb21c679da0e is claimed 00:13:33.414 [2024-07-24 16:30:30.092693] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:13:33.414 [2024-07-24 16:30:30.092715] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:13:33.414 [2024-07-24 16:30:30.093067] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:13:33.414 [2024-07-24 16:30:30.093354] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:13:33.414 [2024-07-24 16:30:30.093371] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000040880 00:13:33.414 [2024-07-24 16:30:30.093598] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.414 16:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:13:33.414 16:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:13:33.672 16:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:13:33.672 16:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:13:33.672 16:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:13:33.930 
16:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:13:33.930 16:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:33.930 16:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:33.930 16:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:33.930 16:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:13:33.930 [2024-07-24 16:30:30.786477] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.189 16:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:34.189 16:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:34.189 16:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 245760 == 245760 )) 00:13:34.189 16:30:30 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:13:34.189 [2024-07-24 16:30:31.015033] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:34.189 [2024-07-24 16:30:31.015070] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'cb926e33-b61c-47f6-9bbe-4109190b6c9c' was resized: old size 131072, new size 204800 00:13:34.189 16:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:13:34.448 [2024-07-24 16:30:31.239578] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:34.448 [2024-07-24 16:30:31.239613] 
bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '569c03b4-5cfb-448f-97dd-eb21c679da0e' was resized: old size 131072, new size 204800 00:13:34.448 [2024-07-24 16:30:31.239648] bdev_raid.c:2315:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:13:34.448 16:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:13:34.448 16:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:13:34.707 16:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:13:34.707 16:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:13:34.707 16:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:13:34.965 16:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:13:34.965 16:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:34.965 16:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:34.965 16:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:34.965 16:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # jq '.[].num_blocks' 00:13:35.223 [2024-07-24 16:30:31.929543] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.223 16:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:35.223 16:30:31 bdev_raid.raid0_resize_superblock_test 
-- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:35.223 16:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # (( 393216 == 393216 )) 00:13:35.223 16:30:31 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:13:35.482 [2024-07-24 16:30:32.157910] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:13:35.482 [2024-07-24 16:30:32.157994] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:13:35.482 [2024-07-24 16:30:32.158011] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.482 [2024-07-24 16:30:32.158029] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:13:35.482 [2024-07-24 16:30:32.158149] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.482 [2024-07-24 16:30:32.158194] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.482 [2024-07-24 16:30:32.158220] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name Raid, state offline 00:13:35.482 16:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:13:35.740 [2024-07-24 16:30:32.382405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:35.740 [2024-07-24 16:30:32.382468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.740 [2024-07-24 16:30:32.382494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040b80 00:13:35.740 [2024-07-24 16:30:32.382514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.740 
[2024-07-24 16:30:32.385313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.741 [2024-07-24 16:30:32.385351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:35.741 pt0 00:13:35.741 [2024-07-24 16:30:32.387547] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev cb926e33-b61c-47f6-9bbe-4109190b6c9c 00:13:35.741 [2024-07-24 16:30:32.387636] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev cb926e33-b61c-47f6-9bbe-4109190b6c9c is claimed 00:13:35.741 [2024-07-24 16:30:32.387788] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 569c03b4-5cfb-448f-97dd-eb21c679da0e 00:13:35.741 [2024-07-24 16:30:32.387816] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev 569c03b4-5cfb-448f-97dd-eb21c679da0e is claimed 00:13:35.741 [2024-07-24 16:30:32.387992] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 569c03b4-5cfb-448f-97dd-eb21c679da0e (2) smaller than existing raid bdev Raid (3) 00:13:35.741 [2024-07-24 16:30:32.388034] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:13:35.741 [2024-07-24 16:30:32.388046] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:13:35.741 [2024-07-24 16:30:32.388349] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:13:35.741 [2024-07-24 16:30:32.388613] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:13:35.741 [2024-07-24 16:30:32.388634] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000041780 00:13:35.741 [2024-07-24 16:30:32.388825] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.741 16:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:35.741 16:30:32 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:35.741 16:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:35.741 16:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # jq '.[].num_blocks' 00:13:36.000 [2024-07-24 16:30:32.611379] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.000 16:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:36.000 16:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:36.000 16:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # (( 393216 == 393216 )) 00:13:36.000 16:30:32 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 1588740 00:13:36.000 16:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 1588740 ']' 00:13:36.000 16:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 1588740 00:13:36.000 16:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:36.000 16:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:36.000 16:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1588740 00:13:36.000 16:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:36.000 16:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:36.000 16:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1588740' 00:13:36.000 killing process with pid 1588740 
00:13:36.000 16:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 1588740 00:13:36.000 [2024-07-24 16:30:32.687081] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:36.000 [2024-07-24 16:30:32.687192] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.000 [2024-07-24 16:30:32.687245] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.000 [2024-07-24 16:30:32.687264] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Raid, state offline 00:13:36.000 16:30:32 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 1588740 00:13:36.568 [2024-07-24 16:30:33.326355] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:38.470 16:30:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:13:38.470 00:13:38.470 real 0m8.674s 00:13:38.470 user 0m11.652s 00:13:38.470 sys 0m1.479s 00:13:38.470 16:30:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.470 16:30:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.470 ************************************ 00:13:38.470 END TEST raid0_resize_superblock_test 00:13:38.470 ************************************ 00:13:38.470 16:30:35 bdev_raid -- bdev/bdev_raid.sh@933 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:13:38.470 16:30:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:38.470 16:30:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.470 16:30:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:38.470 ************************************ 00:13:38.470 START TEST raid1_resize_superblock_test 00:13:38.470 ************************************ 00:13:38.470 16:30:35 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:13:38.470 16:30:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=1 00:13:38.470 16:30:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=1590132 00:13:38.470 16:30:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 1590132' 00:13:38.470 Process raid pid: 1590132 00:13:38.470 16:30:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:38.470 16:30:35 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 1590132 /var/tmp/spdk-raid.sock 00:13:38.470 16:30:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 1590132 ']' 00:13:38.470 16:30:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:38.470 16:30:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:38.470 16:30:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:38.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:38.470 16:30:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:38.470 16:30:35 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.470 [2024-07-24 16:30:35.286258] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:13:38.470 [2024-07-24 16:30:35.286378] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:38.729 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.729 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:38.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.730 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:38.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.730 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:38.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.730 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:38.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.730 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:38.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.730 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:38.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.730 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:38.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.730 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:38.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.730 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:38.730 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.730 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:38.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.730 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:38.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.730 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:38.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.730 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:38.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.730 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:38.730 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:38.730 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:38.730 [2024-07-24 16:30:35.519028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.989 [2024-07-24 16:30:35.793447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.556 [2024-07-24 16:30:36.145149] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.556 [2024-07-24 16:30:36.145190] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.556 16:30:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.556 16:30:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:39.556 16:30:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:13:40.934 malloc0 00:13:40.934 16:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:13:40.934 [2024-07-24 16:30:37.637845] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:40.934 [2024-07-24 16:30:37.637912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.934 [2024-07-24 16:30:37.637946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:13:40.934 [2024-07-24 16:30:37.637965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.934 [2024-07-24 16:30:37.640725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.934 [2024-07-24 16:30:37.640763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:40.934 pt0 00:13:40.934 16:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:13:41.258 b191c881-d5c7-4bf4-b47f-04b9c20dd68b 00:13:41.258 16:30:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:13:41.516 c9358aac-79a0-485b-9dd3-74108369e786 00:13:41.516 16:30:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:13:41.775 f9116d73-cdbb-49f7-a85a-5f45214c7bad 00:13:41.775 16:30:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:13:41.775 16:30:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:13:41.775 [2024-07-24 16:30:38.622378] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev c9358aac-79a0-485b-9dd3-74108369e786 is claimed 
00:13:41.775 [2024-07-24 16:30:38.622510] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev f9116d73-cdbb-49f7-a85a-5f45214c7bad is claimed 00:13:41.775 [2024-07-24 16:30:38.622702] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:13:41.775 [2024-07-24 16:30:38.622724] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:13:41.775 [2024-07-24 16:30:38.623063] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:13:41.775 [2024-07-24 16:30:38.623344] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:13:41.775 [2024-07-24 16:30:38.623364] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000040880 00:13:41.775 [2024-07-24 16:30:38.623575] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.035 16:30:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:13:42.035 16:30:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:13:42.035 16:30:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:13:42.035 16:30:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:13:42.035 16:30:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:13:42.293 16:30:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:13:42.293 16:30:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:42.294 16:30:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:42.294 16:30:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:42.294 16:30:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:13:42.552 [2024-07-24 16:30:39.308507] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.553 16:30:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:42.553 16:30:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:42.553 16:30:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 122880 == 122880 )) 00:13:42.553 16:30:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:13:42.812 [2024-07-24 16:30:39.537062] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:42.812 [2024-07-24 16:30:39.537095] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c9358aac-79a0-485b-9dd3-74108369e786' was resized: old size 131072, new size 204800 00:13:42.812 16:30:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:13:43.071 [2024-07-24 16:30:39.761630] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:43.071 [2024-07-24 16:30:39.761663] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f9116d73-cdbb-49f7-a85a-5f45214c7bad' was resized: old size 131072, new size 204800 00:13:43.071 [2024-07-24 16:30:39.761697] bdev_raid.c:2315:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 
to 196608 00:13:43.071 16:30:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:13:43.071 16:30:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:13:43.330 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:13:43.330 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:13:43.330 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:13:43.588 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:13:43.588 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:43.588 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:43.588 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:43.588 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # jq '.[].num_blocks' 00:13:43.588 [2024-07-24 16:30:40.439554] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.847 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:43.847 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:43.847 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # (( 196608 == 196608 )) 00:13:43.847 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:13:43.847 [2024-07-24 16:30:40.667904] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:13:43.847 [2024-07-24 16:30:40.667982] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:13:43.847 [2024-07-24 16:30:40.668012] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:13:43.847 [2024-07-24 16:30:40.668196] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.847 [2024-07-24 16:30:40.668426] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.847 [2024-07-24 16:30:40.668506] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.847 [2024-07-24 16:30:40.668531] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name Raid, state offline 00:13:43.847 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:13:44.106 [2024-07-24 16:30:40.880369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:44.107 [2024-07-24 16:30:40.880427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.107 [2024-07-24 16:30:40.880452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040b80 00:13:44.107 [2024-07-24 16:30:40.880470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.107 [2024-07-24 16:30:40.883228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.107 [2024-07-24 16:30:40.883265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:44.107 [2024-07-24 
16:30:40.885426] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c9358aac-79a0-485b-9dd3-74108369e786 00:13:44.107 [2024-07-24 16:30:40.885509] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev c9358aac-79a0-485b-9dd3-74108369e786 is claimed 00:13:44.107 [2024-07-24 16:30:40.885674] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f9116d73-cdbb-49f7-a85a-5f45214c7bad 00:13:44.107 [2024-07-24 16:30:40.885701] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev f9116d73-cdbb-49f7-a85a-5f45214c7bad is claimed 00:13:44.107 pt0 00:13:44.107 [2024-07-24 16:30:40.885899] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f9116d73-cdbb-49f7-a85a-5f45214c7bad (2) smaller than existing raid bdev Raid (3) 00:13:44.107 [2024-07-24 16:30:40.885941] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:13:44.107 [2024-07-24 16:30:40.885952] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:44.107 [2024-07-24 16:30:40.886247] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:13:44.107 [2024-07-24 16:30:40.886510] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:13:44.107 [2024-07-24 16:30:40.886532] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000041780 00:13:44.107 [2024-07-24 16:30:40.886723] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.107 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:44.107 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:44.107 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # 
case $raid_level in 00:13:44.107 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # jq '.[].num_blocks' 00:13:44.366 [2024-07-24 16:30:41.109370] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.366 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:44.366 16:30:40 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:44.366 16:30:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # (( 196608 == 196608 )) 00:13:44.366 16:30:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 1590132 00:13:44.366 16:30:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 1590132 ']' 00:13:44.366 16:30:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 1590132 00:13:44.366 16:30:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:44.366 16:30:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.366 16:30:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1590132 00:13:44.366 16:30:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:44.366 16:30:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:44.366 16:30:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1590132' 00:13:44.366 killing process with pid 1590132 00:13:44.366 16:30:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 1590132 00:13:44.366 [2024-07-24 16:30:41.187403] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.366 [2024-07-24 16:30:41.187489] bdev_raid.c: 
486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.366 [2024-07-24 16:30:41.187550] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.366 16:30:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 1590132 00:13:44.366 [2024-07-24 16:30:41.187585] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Raid, state offline 00:13:45.303 [2024-07-24 16:30:41.840499] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:47.213 16:30:43 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:13:47.213 00:13:47.213 real 0m8.378s 00:13:47.213 user 0m11.424s 00:13:47.213 sys 0m1.388s 00:13:47.213 16:30:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:47.213 16:30:43 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.213 ************************************ 00:13:47.213 END TEST raid1_resize_superblock_test 00:13:47.213 ************************************ 00:13:47.213 16:30:43 bdev_raid -- bdev/bdev_raid.sh@935 -- # uname -s 00:13:47.213 16:30:43 bdev_raid -- bdev/bdev_raid.sh@935 -- # '[' Linux = Linux ']' 00:13:47.213 16:30:43 bdev_raid -- bdev/bdev_raid.sh@935 -- # modprobe -n nbd 00:13:47.213 16:30:43 bdev_raid -- bdev/bdev_raid.sh@936 -- # has_nbd=true 00:13:47.213 16:30:43 bdev_raid -- bdev/bdev_raid.sh@937 -- # modprobe nbd 00:13:47.213 16:30:43 bdev_raid -- bdev/bdev_raid.sh@938 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:47.213 16:30:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:47.213 16:30:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:47.213 16:30:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:47.213 ************************************ 00:13:47.213 START TEST raid_function_test_raid0 00:13:47.213 
************************************ 00:13:47.213 16:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:13:47.213 16:30:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:13:47.213 16:30:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:13:47.213 16:30:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:13:47.213 16:30:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=1591712 00:13:47.213 16:30:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:47.213 16:30:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 1591712' 00:13:47.213 Process raid pid: 1591712 00:13:47.213 16:30:43 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 1591712 /var/tmp/spdk-raid.sock 00:13:47.213 16:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 1591712 ']' 00:13:47.213 16:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:47.213 16:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:47.213 16:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:47.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:47.213 16:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:47.213 16:30:43 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:47.213 [2024-07-24 16:30:43.757932] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:13:47.213 [2024-07-24 16:30:43.758052] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.213 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:47.213 EAL: Requested device 0000:3d:01.0 cannot be used [qat_pci_device_allocate()/EAL 'cannot be used' pair repeated for each device 0000:3d:01.1 through 0000:3f:02.7] 00:13:47.214 [2024-07-24 16:30:43.988702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.473 [2024-07-24 16:30:44.277073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.041 [2024-07-24 16:30:44.620195] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.041 [2024-07-24 16:30:44.620231] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.041 16:30:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:48.041 16:30:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:13:48.041 16:30:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:13:48.041 16:30:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local
raid_level=raid0 00:13:48.041 16:30:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:13:48.041 16:30:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:13:48.041 16:30:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:48.301 [2024-07-24 16:30:45.148328] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:48.301 [2024-07-24 16:30:45.150626] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:48.301 [2024-07-24 16:30:45.150698] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:13:48.301 [2024-07-24 16:30:45.150721] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:48.301 [2024-07-24 16:30:45.151054] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:13:48.301 [2024-07-24 16:30:45.151274] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:13:48.301 [2024-07-24 16:30:45.151290] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x61600003ff80 00:13:48.301 [2024-07-24 16:30:45.151476] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.301 Base_1 00:13:48.301 Base_2 00:13:48.560 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:13:48.560 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:13:48.560 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:48.560 
16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:13:48.560 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:13:48.560 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:48.560 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:48.560 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:48.560 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:48.560 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:48.560 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:48.560 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:13:48.560 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:48.560 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.560 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:48.819 [2024-07-24 16:30:45.621621] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:13:48.819 /dev/nbd0 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:13:48.819 16:30:45 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.819 1+0 records in 00:13:48.819 1+0 records out 00:13:48.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187454 s, 21.9 MB/s 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.819 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.820 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:48.820 16:30:45 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:48.820 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:49.079 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:49.079 { 00:13:49.079 "nbd_device": "/dev/nbd0", 00:13:49.079 "bdev_name": "raid" 00:13:49.079 } 00:13:49.079 ]' 00:13:49.079 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:49.079 { 00:13:49.079 "nbd_device": "/dev/nbd0", 00:13:49.079 "bdev_name": "raid" 00:13:49.079 } 00:13:49.079 ]' 00:13:49.079 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:49.338 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:49.338 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:49.338 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:49.338 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:13:49.338 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:49.338 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:13:49.338 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:13:49.338 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:13:49.339 16:30:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:49.339 4096+0 records in 00:13:49.339 4096+0 records out 00:13:49.339 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0274579 s, 76.4 MB/s 00:13:49.339 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:49.598 4096+0 records in 00:13:49.598 4096+0 records out 00:13:49.598 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.215945 s, 
9.7 MB/s 00:13:49.598 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:13:49.598 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:49.598 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:13:49.598 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:49.598 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:49.599 128+0 records in 00:13:49.599 128+0 records out 00:13:49.599 65536 bytes (66 kB, 64 KiB) copied, 0.000847897 s, 77.3 MB/s 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:49.599 2035+0 records in 00:13:49.599 2035+0 records out 00:13:49.599 1041920 bytes (1.0 
MB, 1018 KiB) copied, 0.0113526 s, 91.8 MB/s 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:49.599 456+0 records in 00:13:49.599 456+0 records out 00:13:49.599 233472 bytes (233 kB, 228 KiB) copied, 0.00270188 s, 86.4 MB/s 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:49.599 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:49.858 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:49.858 [2024-07-24 16:30:46.582853] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.858 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:49.858 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:49.858 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:49.858 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:49.858 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:49.858 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:13:49.858 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:13:49.858 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:49.858 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:49.858 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 1591712 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 1591712 ']' 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 1591712 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1591712 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1591712' 00:13:50.118 killing process with pid 1591712 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 1591712 00:13:50.118 [2024-07-24 16:30:46.938297] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.118 [2024-07-24 16:30:46.938409] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.118 16:30:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 1591712 00:13:50.118 [2024-07-24 16:30:46.938470] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.118 [2024-07-24 16:30:46.938490] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name raid, state offline 00:13:50.378 [2024-07-24 16:30:47.130064] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.283 16:30:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:13:52.283 00:13:52.283 real 0m5.182s 00:13:52.283 user 0m6.042s 00:13:52.283 sys 0m1.351s 00:13:52.283 16:30:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.283 16:30:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:52.283 ************************************ 00:13:52.283 END TEST raid_function_test_raid0 00:13:52.283 ************************************ 00:13:52.283 16:30:48 bdev_raid -- bdev/bdev_raid.sh@939 -- # run_test raid_function_test_concat raid_function_test concat 00:13:52.283 16:30:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:52.283 16:30:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:52.283 
16:30:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.283 ************************************ 00:13:52.283 START TEST raid_function_test_concat 00:13:52.283 ************************************ 00:13:52.283 16:30:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:13:52.283 16:30:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:13:52.283 16:30:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:13:52.283 16:30:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:13:52.283 16:30:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=1592643 00:13:52.283 16:30:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 1592643' 00:13:52.283 Process raid pid: 1592643 00:13:52.283 16:30:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:52.283 16:30:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 1592643 /var/tmp/spdk-raid.sock 00:13:52.283 16:30:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 1592643 ']' 00:13:52.283 16:30:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:52.283 16:30:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:52.283 16:30:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:52.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:52.283 16:30:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:52.283 16:30:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:52.283 [2024-07-24 16:30:49.018043] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:13:52.283 [2024-07-24 16:30:49.018165] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.541 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:52.541 EAL: Requested device 0000:3d:01.0 cannot be used [qat_pci_device_allocate()/EAL 'cannot be used' pair repeated for each device 0000:3d:01.1 through 0000:3f:02.7] 00:13:52.541 [2024-07-24 16:30:49.243939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.799 [2024-07-24 16:30:49.528811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.057 [2024-07-24 16:30:49.886448] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.057 [2024-07-24 16:30:49.886488] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.314 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:53.314 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:13:53.314 16:30:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:13:53.314 16:30:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local
raid_level=concat 00:13:53.314 16:30:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:13:53.314 16:30:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:13:53.315 16:30:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:53.573 [2024-07-24 16:30:50.418392] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:53.573 [2024-07-24 16:30:50.420664] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:53.573 [2024-07-24 16:30:50.420738] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:13:53.573 [2024-07-24 16:30:50.420757] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:53.573 [2024-07-24 16:30:50.421084] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:13:53.573 [2024-07-24 16:30:50.421321] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:13:53.573 [2024-07-24 16:30:50.421337] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x61600003ff80 00:13:53.573 [2024-07-24 16:30:50.421520] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.573 Base_1 00:13:53.573 Base_2 00:13:53.832 16:30:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:13:53.832 16:30:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:53.832 16:30:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:13:53.832 
16:30:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid
00:13:53.832 16:30:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']'
00:13:53.832 16:30:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0
00:13:53.832 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:53.832 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:13:53.832 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:53.832 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:53.832 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:53.832 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i
00:13:53.832 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:53.832 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:53.832 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0
00:13:54.091 [2024-07-24 16:30:50.895667] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710
00:13:54.091 /dev/nbd0
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:54.091 1+0 records in
00:13:54.091 1+0 records out
00:13:54.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251019 s, 16.3 MB/s
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0
00:13:54.091 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:54.351 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:54.351 16:30:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:13:54.351 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:54.351 16:30:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:13:54.351 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:13:54.351 {
00:13:54.351 "nbd_device": "/dev/nbd0",
00:13:54.351 "bdev_name": "raid"
00:13:54.351 }
00:13:54.351 ]'
00:13:54.351 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[
00:13:54.351 {
00:13:54.351 "nbd_device": "/dev/nbd0",
00:13:54.351 "bdev_name": "raid"
00:13:54.351 }
00:13:54.351 ]'
00:13:54.351 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']'
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20
-- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321')
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456')
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:13:54.610 4096+0 records in
00:13:54.610 4096+0 records out
00:13:54.610 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0272445 s, 77.0 MB/s
00:13:54.610 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:13:54.869 4096+0 records in
00:13:54.869 4096+0 records out
00:13:54.870 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.208128 s, 10.1 MB/s
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 ))
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 ))
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:13:54.870 128+0 records in
00:13:54.870 128+0 records out
00:13:54.870 65536 bytes (66 kB, 64 KiB) copied, 0.000804607 s, 81.5 MB/s
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ ))
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 ))
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:13:54.870 2035+0 records in
00:13:54.870 2035+0 records out
00:13:54.870 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0105535 s, 98.7 MB/s
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ ))
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 ))
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:13:54.870 456+0 records in
00:13:54.870 456+0 records out
00:13:54.870 233472 bytes (233 kB, 228 KiB) copied, 0.00275814 s, 84.6 MB/s
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ ))
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 ))
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:54.870 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:13:55.180 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:55.180 [2024-07-24 16:30:51.859972] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:55.180 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:55.180 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:55.180 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:55.180 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:55.180 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:55.180 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:13:55.180 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:13:55.180 16:30:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock
00:13:55.180 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:55.180 16:30:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']'
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 1592643
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 1592643 ']'
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 1592643
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1592643
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1592643'
00:13:55.455 killing process with pid 1592643
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 1592643
00:13:55.455 [2024-07-24 16:30:52.214336] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:55.455 [2024-07-24 16:30:52.214454] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:13:55.455 16:30:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 1592643
00:13:55.455 [2024-07-24 16:30:52.214517] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:55.455 [2024-07-24 16:30:52.214538] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name raid, state offline
00:13:55.715 [2024-07-24 16:30:52.418844] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:57.626 16:30:54 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0
00:13:57.626
00:13:57.626 real 0m5.207s
00:13:57.626 user 0m6.069s
00:13:57.626 sys 0m1.360s
00:13:57.626 16:30:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:57.626 16:30:54 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:13:57.626 ************************************
00:13:57.626 END TEST raid_function_test_concat
00:13:57.626 ************************************
00:13:57.626 16:30:54 bdev_raid -- bdev/bdev_raid.sh@942 -- # run_test raid0_resize_test raid_resize_test 0
00:13:57.626 16:30:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:13:57.626 16:30:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:57.626 16:30:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:57.626 ************************************
00:13:57.626 START TEST raid0_resize_test
00:13:57.626 ************************************
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=0
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=1593539
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 1593539'
00:13:57.626 Process raid pid: 1593539
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 1593539 /var/tmp/spdk-raid.sock
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 1593539 ']'
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:13:57.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:57.626 16:30:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.626 [2024-07-24 16:30:54.316945] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:13:57.626 [2024-07-24 16:30:54.317063] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:57.626 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.626 EAL: Requested device 0000:3d:01.0 cannot be used
00:13:57.626 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.626 EAL: Requested device 0000:3d:01.1 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3d:01.2 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3d:01.3 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3d:01.4 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3d:01.5 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3d:01.6 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3d:01.7 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3d:02.0 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3d:02.1 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3d:02.2 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3d:02.3 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3d:02.4 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3d:02.5 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3d:02.6 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3d:02.7 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:01.0 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:01.1 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:01.2 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:01.3 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:01.4 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:01.5 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:01.6 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:01.7 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:02.0 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:02.1 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:02.2 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:02.3 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:02.4 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:02.5 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:02.6 cannot be used
00:13:57.627 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:57.627 EAL: Requested device 0000:3f:02.7 cannot be used
00:13:57.886 [2024-07-24 16:30:54.547249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:58.145 [2024-07-24 16:30:54.831736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:58.403 [2024-07-24 16:30:55.186852] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:58.403 [2024-07-24 16:30:55.186889] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:58.662 16:30:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:58.662 16:30:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0
00:13:58.662 16:30:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512
00:13:58.921 Base_1
00:13:58.921 16:30:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512
00:13:59.180 Base_2
00:13:59.180 16:30:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 0 -eq 0 ']'
00:13:59.180 16:30:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
00:13:59.439 [2024-07-24 16:30:56.050971] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:13:59.439 [2024-07-24 16:30:56.053290] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:13:59.439 [2024-07-24 16:30:56.053361] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80
00:13:59.439 [2024-07-24 16:30:56.053383] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:13:59.439 [2024-07-24 16:30:56.053746] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000103d0
00:13:59.439 [2024-07-24 16:30:56.053943] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80
00:13:59.439 [2024-07-24 16:30:56.053957] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x61600003ff80
00:13:59.439 [2024-07-24 16:30:56.054188] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:59.439 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64
00:13:59.439 [2024-07-24 16:30:56.279518] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:13:59.439 [2024-07-24 16:30:56.279556] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:13:59.439 true
00:13:59.698 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
00:13:59.698 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks'
00:13:59.698 [2024-07-24 16:30:56.508373] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:59.698 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=131072
00:13:59.698 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=64
00:13:59.698 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 0 -eq 0 ']'
00:13:59.698 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # expected_size=64
00:13:59.698 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 64 '!=' 64 ']'
00:13:59.698 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64
00:13:59.957 [2024-07-24 16:30:56.740790] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:13:59.957 [2024-07-24 16:30:56.740828] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:13:59.957 [2024-07-24 16:30:56.740871] bdev_raid.c:2315:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144
00:13:59.957 true
00:13:59.957 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid
00:13:59.957 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks'
00:14:00.216 [2024-07-24 16:30:56.965581] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:00.216 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=262144
00:14:00.216 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=128
00:14:00.216 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 0 -eq 0 ']'
00:14:00.216 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@393 -- # expected_size=128
00:14:00.216 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 128 '!=' 128 ']'
00:14:00.216 16:30:56 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 1593539
00:14:00.216 16:30:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 1593539 ']'
00:14:00.216 16:30:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 1593539
00:14:00.216 16:30:56 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname
00:14:00.216 16:30:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:14:00.216 16:30:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1593539
00:14:00.216 16:30:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:14:00.216 16:30:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:14:00.216 16:30:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1593539'
00:14:00.216 killing process with pid 1593539
00:14:00.216 16:30:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 1593539
00:14:00.216 [2024-07-24 16:30:57.056858] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:00.216 [2024-07-24 16:30:57.056968] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:00.216 16:30:57 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 1593539
00:14:00.216 [2024-07-24 16:30:57.057028] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:00.216 [2024-07-24 16:30:57.057044] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Raid, state offline
00:14:00.216 [2024-07-24 16:30:57.071096] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:02.123 16:30:58 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@404 -- # return 0
00:14:02.123
00:14:02.123 real 0m4.582s
00:14:02.123 user 0m5.849s
00:14:02.123 sys 0m0.847s
00:14:02.123 16:30:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:02.123 16:30:58 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:14:02.123 ************************************
00:14:02.123 END TEST raid0_resize_test
00:14:02.123 ************************************
00:14:02.123 16:30:58 bdev_raid -- bdev/bdev_raid.sh@943 -- # run_test raid1_resize_test raid_resize_test 1
00:14:02.123 16:30:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:14:02.123 16:30:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:02.123 16:30:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:02.123 ************************************
00:14:02.123 START TEST raid1_resize_test
00:14:02.123 ************************************
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=1
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=1594372
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 1594372'
00:14:02.123 Process raid pid: 1594372
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 1594372 /var/tmp/spdk-raid.sock
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 1594372 ']'
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.123 16:30:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.383 [2024-07-24 16:30:58.989989] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:14:02.383 [2024-07-24 16:30:58.990105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:02.383 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:02.383 [2024-07-24 16:30:59.221265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.952 [2024-07-24 16:30:59.505535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.211 [2024-07-24 16:30:59.821570] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.211 [2024-07-24 16:30:59.821607] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.211 16:30:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.211 16:30:59 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:14:03.211 16:30:59 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:03.469 Base_1 00:14:03.469 16:31:00 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:03.728 Base_2 00:14:03.728 16:31:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 1 -eq 0 ']' 00:14:03.728 16:31:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@367 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r 1 -b 'Base_1 Base_2' -n Raid 00:14:03.986 [2024-07-24 16:31:00.651619] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:03.986 [2024-07-24 16:31:00.653928] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:03.986 [2024-07-24 16:31:00.653994] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:14:03.986 [2024-07-24 16:31:00.654022] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:03.986 [2024-07-24 16:31:00.654364] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000103d0 00:14:03.986 [2024-07-24 16:31:00.654556] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:14:03.986 [2024-07-24 16:31:00.654571] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x61600003ff80 00:14:03.986 [2024-07-24 16:31:00.654765] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.986 16:31:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:04.245 [2024-07-24 16:31:00.884171] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:04.245 [2024-07-24 16:31:00.884201] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new 
size 131072 00:14:04.245 true 00:14:04.245 16:31:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:04.245 16:31:00 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:14:04.245 [2024-07-24 16:31:01.060881] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.245 16:31:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=65536 00:14:04.245 16:31:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=32 00:14:04.245 16:31:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 1 -eq 0 ']' 00:14:04.245 16:31:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@379 -- # expected_size=32 00:14:04.245 16:31:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 32 '!=' 32 ']' 00:14:04.245 16:31:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:04.504 [2024-07-24 16:31:01.293313] bdev_raid.c:2288:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:04.504 [2024-07-24 16:31:01.293399] bdev_raid.c:2301:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:04.504 [2024-07-24 16:31:01.293434] bdev_raid.c:2315:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:14:04.504 true 00:14:04.504 16:31:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:04.504 16:31:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:14:04.763 [2024-07-24 16:31:01.518131] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=131072 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=64 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 1 -eq 0 ']' 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@395 -- # expected_size=64 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 64 '!=' 64 ']' 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 1594372 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 1594372 ']' 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 1594372 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1594372 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1594372' 00:14:04.763 killing process with pid 1594372 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 1594372 00:14:04.763 [2024-07-24 16:31:01.592097] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.763 [2024-07-24 16:31:01.592216] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.763 16:31:01 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 1594372 00:14:04.763 [2024-07-24 16:31:01.592745] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.763 [2024-07-24 16:31:01.592765] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Raid, state offline 00:14:04.763 [2024-07-24 16:31:01.606040] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:06.670 16:31:03 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:14:06.670 00:14:06.670 real 0m4.453s 00:14:06.670 user 0m5.704s 00:14:06.670 sys 0m0.790s 00:14:06.670 16:31:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.670 16:31:03 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.670 ************************************ 00:14:06.670 END TEST raid1_resize_test 00:14:06.670 ************************************ 00:14:06.670 16:31:03 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:14:06.670 16:31:03 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:14:06.670 16:31:03 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:06.671 16:31:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:06.671 16:31:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:06.671 16:31:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:06.671 ************************************ 00:14:06.671 START TEST raid_state_function_test 00:14:06.671 ************************************ 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 
64' 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1595182 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1595182' 00:14:06.671 Process raid pid: 1595182 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1595182 /var/tmp/spdk-raid.sock 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 1595182 ']' 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:06.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:06.671 16:31:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.671 [2024-07-24 16:31:03.527984] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:14:06.671 [2024-07-24 16:31:03.528109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:06.930 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:06.930 [2024-07-24 16:31:03.744492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.189 [2024-07-24 16:31:04.033443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.757 [2024-07-24 16:31:04.391754] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.757 [2024-07-24 16:31:04.391792] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.757 16:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:07.757 16:31:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:07.757 16:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:08.017 [2024-07-24 16:31:04.740855] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.017 [2024-07-24 16:31:04.740910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:14:08.017 [2024-07-24 16:31:04.740925] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:08.017 [2024-07-24 16:31:04.740942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:08.017 16:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:08.017 16:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:08.017 16:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:08.017 16:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:08.017 16:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:08.017 16:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:08.017 16:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:08.017 16:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:08.017 16:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:08.017 16:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:08.017 16:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.017 16:31:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.276 16:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:08.276 "name": "Existed_Raid", 00:14:08.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.276 "strip_size_kb": 64, 
00:14:08.276 "state": "configuring", 00:14:08.276 "raid_level": "raid0", 00:14:08.276 "superblock": false, 00:14:08.276 "num_base_bdevs": 2, 00:14:08.276 "num_base_bdevs_discovered": 0, 00:14:08.276 "num_base_bdevs_operational": 2, 00:14:08.276 "base_bdevs_list": [ 00:14:08.276 { 00:14:08.276 "name": "BaseBdev1", 00:14:08.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.276 "is_configured": false, 00:14:08.276 "data_offset": 0, 00:14:08.276 "data_size": 0 00:14:08.276 }, 00:14:08.276 { 00:14:08.276 "name": "BaseBdev2", 00:14:08.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.276 "is_configured": false, 00:14:08.276 "data_offset": 0, 00:14:08.276 "data_size": 0 00:14:08.276 } 00:14:08.276 ] 00:14:08.276 }' 00:14:08.276 16:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:08.276 16:31:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.845 16:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:09.104 [2024-07-24 16:31:05.787561] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.104 [2024-07-24 16:31:05.787603] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:14:09.104 16:31:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:09.364 [2024-07-24 16:31:06.016227] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:09.364 [2024-07-24 16:31:06.016293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:09.364 [2024-07-24 16:31:06.016308] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.364 [2024-07-24 16:31:06.016325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.364 16:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:09.623 [2024-07-24 16:31:06.303007] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.623 BaseBdev1 00:14:09.623 16:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:09.623 16:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:09.623 16:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:09.623 16:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:09.623 16:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:09.623 16:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:09.623 16:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:09.906 16:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:10.179 [ 00:14:10.179 { 00:14:10.179 "name": "BaseBdev1", 00:14:10.179 "aliases": [ 00:14:10.179 "da623c21-9630-4967-be8d-761d1b4e1d15" 00:14:10.179 ], 00:14:10.179 "product_name": "Malloc disk", 00:14:10.179 "block_size": 512, 00:14:10.179 "num_blocks": 65536, 00:14:10.179 "uuid": 
"da623c21-9630-4967-be8d-761d1b4e1d15", 00:14:10.179 "assigned_rate_limits": { 00:14:10.179 "rw_ios_per_sec": 0, 00:14:10.179 "rw_mbytes_per_sec": 0, 00:14:10.179 "r_mbytes_per_sec": 0, 00:14:10.179 "w_mbytes_per_sec": 0 00:14:10.179 }, 00:14:10.179 "claimed": true, 00:14:10.179 "claim_type": "exclusive_write", 00:14:10.179 "zoned": false, 00:14:10.179 "supported_io_types": { 00:14:10.179 "read": true, 00:14:10.179 "write": true, 00:14:10.179 "unmap": true, 00:14:10.179 "flush": true, 00:14:10.179 "reset": true, 00:14:10.179 "nvme_admin": false, 00:14:10.179 "nvme_io": false, 00:14:10.179 "nvme_io_md": false, 00:14:10.179 "write_zeroes": true, 00:14:10.179 "zcopy": true, 00:14:10.179 "get_zone_info": false, 00:14:10.179 "zone_management": false, 00:14:10.179 "zone_append": false, 00:14:10.179 "compare": false, 00:14:10.179 "compare_and_write": false, 00:14:10.179 "abort": true, 00:14:10.179 "seek_hole": false, 00:14:10.179 "seek_data": false, 00:14:10.180 "copy": true, 00:14:10.180 "nvme_iov_md": false 00:14:10.180 }, 00:14:10.180 "memory_domains": [ 00:14:10.180 { 00:14:10.180 "dma_device_id": "system", 00:14:10.180 "dma_device_type": 1 00:14:10.180 }, 00:14:10.180 { 00:14:10.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.180 "dma_device_type": 2 00:14:10.180 } 00:14:10.180 ], 00:14:10.180 "driver_specific": {} 00:14:10.180 } 00:14:10.180 ] 00:14:10.180 16:31:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:10.180 16:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:10.180 16:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:10.180 16:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:10.180 16:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:10.180 16:31:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:10.180 16:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:10.180 16:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:10.180 16:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:10.180 16:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:10.180 16:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:10.180 16:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.180 16:31:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.180 16:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:10.180 "name": "Existed_Raid", 00:14:10.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.180 "strip_size_kb": 64, 00:14:10.180 "state": "configuring", 00:14:10.180 "raid_level": "raid0", 00:14:10.180 "superblock": false, 00:14:10.180 "num_base_bdevs": 2, 00:14:10.180 "num_base_bdevs_discovered": 1, 00:14:10.180 "num_base_bdevs_operational": 2, 00:14:10.180 "base_bdevs_list": [ 00:14:10.180 { 00:14:10.180 "name": "BaseBdev1", 00:14:10.180 "uuid": "da623c21-9630-4967-be8d-761d1b4e1d15", 00:14:10.180 "is_configured": true, 00:14:10.180 "data_offset": 0, 00:14:10.180 "data_size": 65536 00:14:10.180 }, 00:14:10.180 { 00:14:10.180 "name": "BaseBdev2", 00:14:10.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.180 "is_configured": false, 00:14:10.180 "data_offset": 0, 00:14:10.180 "data_size": 0 00:14:10.180 } 00:14:10.180 ] 00:14:10.180 }' 00:14:10.180 16:31:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:10.180 16:31:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.748 16:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:11.008 [2024-07-24 16:31:07.755262] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:11.008 [2024-07-24 16:31:07.755314] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:14:11.008 16:31:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:11.267 [2024-07-24 16:31:07.983942] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.267 [2024-07-24 16:31:07.986233] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:11.267 [2024-07-24 16:31:07.986275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:11.267 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:11.268 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:11.268 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:11.268 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:11.268 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:11.268 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:14:11.268 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:11.268 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:11.268 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:11.268 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:11.268 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:11.268 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:11.268 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.268 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.526 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:11.526 "name": "Existed_Raid", 00:14:11.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.526 "strip_size_kb": 64, 00:14:11.526 "state": "configuring", 00:14:11.527 "raid_level": "raid0", 00:14:11.527 "superblock": false, 00:14:11.527 "num_base_bdevs": 2, 00:14:11.527 "num_base_bdevs_discovered": 1, 00:14:11.527 "num_base_bdevs_operational": 2, 00:14:11.527 "base_bdevs_list": [ 00:14:11.527 { 00:14:11.527 "name": "BaseBdev1", 00:14:11.527 "uuid": "da623c21-9630-4967-be8d-761d1b4e1d15", 00:14:11.527 "is_configured": true, 00:14:11.527 "data_offset": 0, 00:14:11.527 "data_size": 65536 00:14:11.527 }, 00:14:11.527 { 00:14:11.527 "name": "BaseBdev2", 00:14:11.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.527 "is_configured": false, 00:14:11.527 "data_offset": 0, 00:14:11.527 "data_size": 0 00:14:11.527 } 00:14:11.527 ] 00:14:11.527 }' 
00:14:11.527 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:11.527 16:31:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.095 16:31:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:12.354 [2024-07-24 16:31:09.059238] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.354 [2024-07-24 16:31:09.059284] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:14:12.354 [2024-07-24 16:31:09.059298] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:12.354 [2024-07-24 16:31:09.059641] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:14:12.354 [2024-07-24 16:31:09.059874] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:14:12.354 [2024-07-24 16:31:09.059892] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:14:12.354 [2024-07-24 16:31:09.060195] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.354 BaseBdev2 00:14:12.354 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:12.354 16:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:12.354 16:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:12.354 16:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:12.354 16:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:12.354 16:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:14:12.354 16:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.613 16:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:12.872 [ 00:14:12.872 { 00:14:12.872 "name": "BaseBdev2", 00:14:12.872 "aliases": [ 00:14:12.872 "5cfb4cca-0341-4d17-9f64-6a8dde693a22" 00:14:12.872 ], 00:14:12.872 "product_name": "Malloc disk", 00:14:12.872 "block_size": 512, 00:14:12.872 "num_blocks": 65536, 00:14:12.872 "uuid": "5cfb4cca-0341-4d17-9f64-6a8dde693a22", 00:14:12.872 "assigned_rate_limits": { 00:14:12.872 "rw_ios_per_sec": 0, 00:14:12.872 "rw_mbytes_per_sec": 0, 00:14:12.872 "r_mbytes_per_sec": 0, 00:14:12.872 "w_mbytes_per_sec": 0 00:14:12.872 }, 00:14:12.872 "claimed": true, 00:14:12.872 "claim_type": "exclusive_write", 00:14:12.872 "zoned": false, 00:14:12.872 "supported_io_types": { 00:14:12.872 "read": true, 00:14:12.872 "write": true, 00:14:12.872 "unmap": true, 00:14:12.872 "flush": true, 00:14:12.872 "reset": true, 00:14:12.872 "nvme_admin": false, 00:14:12.872 "nvme_io": false, 00:14:12.872 "nvme_io_md": false, 00:14:12.872 "write_zeroes": true, 00:14:12.872 "zcopy": true, 00:14:12.872 "get_zone_info": false, 00:14:12.872 "zone_management": false, 00:14:12.872 "zone_append": false, 00:14:12.872 "compare": false, 00:14:12.872 "compare_and_write": false, 00:14:12.872 "abort": true, 00:14:12.872 "seek_hole": false, 00:14:12.872 "seek_data": false, 00:14:12.872 "copy": true, 00:14:12.872 "nvme_iov_md": false 00:14:12.872 }, 00:14:12.872 "memory_domains": [ 00:14:12.872 { 00:14:12.872 "dma_device_id": "system", 00:14:12.872 "dma_device_type": 1 00:14:12.872 }, 00:14:12.872 { 00:14:12.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.872 
"dma_device_type": 2 00:14:12.872 } 00:14:12.872 ], 00:14:12.872 "driver_specific": {} 00:14:12.872 } 00:14:12.872 ] 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.872 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.131 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:14:13.132 "name": "Existed_Raid", 00:14:13.132 "uuid": "2d8c8e90-b5ac-4d26-9861-a1a59cb8ad92", 00:14:13.132 "strip_size_kb": 64, 00:14:13.132 "state": "online", 00:14:13.132 "raid_level": "raid0", 00:14:13.132 "superblock": false, 00:14:13.132 "num_base_bdevs": 2, 00:14:13.132 "num_base_bdevs_discovered": 2, 00:14:13.132 "num_base_bdevs_operational": 2, 00:14:13.132 "base_bdevs_list": [ 00:14:13.132 { 00:14:13.132 "name": "BaseBdev1", 00:14:13.132 "uuid": "da623c21-9630-4967-be8d-761d1b4e1d15", 00:14:13.132 "is_configured": true, 00:14:13.132 "data_offset": 0, 00:14:13.132 "data_size": 65536 00:14:13.132 }, 00:14:13.132 { 00:14:13.132 "name": "BaseBdev2", 00:14:13.132 "uuid": "5cfb4cca-0341-4d17-9f64-6a8dde693a22", 00:14:13.132 "is_configured": true, 00:14:13.132 "data_offset": 0, 00:14:13.132 "data_size": 65536 00:14:13.132 } 00:14:13.132 ] 00:14:13.132 }' 00:14:13.132 16:31:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.132 16:31:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.700 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:13.700 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:13.700 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:13.700 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:13.700 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:13.700 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:13.700 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:13.700 16:31:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:13.700 [2024-07-24 16:31:10.543666] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.959 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:13.959 "name": "Existed_Raid", 00:14:13.959 "aliases": [ 00:14:13.959 "2d8c8e90-b5ac-4d26-9861-a1a59cb8ad92" 00:14:13.959 ], 00:14:13.959 "product_name": "Raid Volume", 00:14:13.959 "block_size": 512, 00:14:13.959 "num_blocks": 131072, 00:14:13.959 "uuid": "2d8c8e90-b5ac-4d26-9861-a1a59cb8ad92", 00:14:13.959 "assigned_rate_limits": { 00:14:13.959 "rw_ios_per_sec": 0, 00:14:13.959 "rw_mbytes_per_sec": 0, 00:14:13.959 "r_mbytes_per_sec": 0, 00:14:13.959 "w_mbytes_per_sec": 0 00:14:13.959 }, 00:14:13.959 "claimed": false, 00:14:13.959 "zoned": false, 00:14:13.959 "supported_io_types": { 00:14:13.959 "read": true, 00:14:13.959 "write": true, 00:14:13.959 "unmap": true, 00:14:13.959 "flush": true, 00:14:13.959 "reset": true, 00:14:13.959 "nvme_admin": false, 00:14:13.959 "nvme_io": false, 00:14:13.959 "nvme_io_md": false, 00:14:13.959 "write_zeroes": true, 00:14:13.960 "zcopy": false, 00:14:13.960 "get_zone_info": false, 00:14:13.960 "zone_management": false, 00:14:13.960 "zone_append": false, 00:14:13.960 "compare": false, 00:14:13.960 "compare_and_write": false, 00:14:13.960 "abort": false, 00:14:13.960 "seek_hole": false, 00:14:13.960 "seek_data": false, 00:14:13.960 "copy": false, 00:14:13.960 "nvme_iov_md": false 00:14:13.960 }, 00:14:13.960 "memory_domains": [ 00:14:13.960 { 00:14:13.960 "dma_device_id": "system", 00:14:13.960 "dma_device_type": 1 00:14:13.960 }, 00:14:13.960 { 00:14:13.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.960 "dma_device_type": 2 00:14:13.960 }, 00:14:13.960 { 00:14:13.960 "dma_device_id": "system", 00:14:13.960 "dma_device_type": 1 00:14:13.960 }, 00:14:13.960 { 00:14:13.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:13.960 "dma_device_type": 2 00:14:13.960 } 00:14:13.960 ], 00:14:13.960 "driver_specific": { 00:14:13.960 "raid": { 00:14:13.960 "uuid": "2d8c8e90-b5ac-4d26-9861-a1a59cb8ad92", 00:14:13.960 "strip_size_kb": 64, 00:14:13.960 "state": "online", 00:14:13.960 "raid_level": "raid0", 00:14:13.960 "superblock": false, 00:14:13.960 "num_base_bdevs": 2, 00:14:13.960 "num_base_bdevs_discovered": 2, 00:14:13.960 "num_base_bdevs_operational": 2, 00:14:13.960 "base_bdevs_list": [ 00:14:13.960 { 00:14:13.960 "name": "BaseBdev1", 00:14:13.960 "uuid": "da623c21-9630-4967-be8d-761d1b4e1d15", 00:14:13.960 "is_configured": true, 00:14:13.960 "data_offset": 0, 00:14:13.960 "data_size": 65536 00:14:13.960 }, 00:14:13.960 { 00:14:13.960 "name": "BaseBdev2", 00:14:13.960 "uuid": "5cfb4cca-0341-4d17-9f64-6a8dde693a22", 00:14:13.960 "is_configured": true, 00:14:13.960 "data_offset": 0, 00:14:13.960 "data_size": 65536 00:14:13.960 } 00:14:13.960 ] 00:14:13.960 } 00:14:13.960 } 00:14:13.960 }' 00:14:13.960 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:13.960 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:13.960 BaseBdev2' 00:14:13.960 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:13.960 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:13.960 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:14.219 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:14.219 "name": "BaseBdev1", 00:14:14.219 "aliases": [ 00:14:14.219 "da623c21-9630-4967-be8d-761d1b4e1d15" 00:14:14.219 ], 00:14:14.219 "product_name": "Malloc disk", 
00:14:14.219 "block_size": 512, 00:14:14.219 "num_blocks": 65536, 00:14:14.219 "uuid": "da623c21-9630-4967-be8d-761d1b4e1d15", 00:14:14.219 "assigned_rate_limits": { 00:14:14.219 "rw_ios_per_sec": 0, 00:14:14.219 "rw_mbytes_per_sec": 0, 00:14:14.219 "r_mbytes_per_sec": 0, 00:14:14.219 "w_mbytes_per_sec": 0 00:14:14.219 }, 00:14:14.219 "claimed": true, 00:14:14.219 "claim_type": "exclusive_write", 00:14:14.219 "zoned": false, 00:14:14.219 "supported_io_types": { 00:14:14.219 "read": true, 00:14:14.219 "write": true, 00:14:14.219 "unmap": true, 00:14:14.219 "flush": true, 00:14:14.219 "reset": true, 00:14:14.219 "nvme_admin": false, 00:14:14.219 "nvme_io": false, 00:14:14.219 "nvme_io_md": false, 00:14:14.219 "write_zeroes": true, 00:14:14.219 "zcopy": true, 00:14:14.219 "get_zone_info": false, 00:14:14.219 "zone_management": false, 00:14:14.219 "zone_append": false, 00:14:14.219 "compare": false, 00:14:14.219 "compare_and_write": false, 00:14:14.219 "abort": true, 00:14:14.219 "seek_hole": false, 00:14:14.219 "seek_data": false, 00:14:14.219 "copy": true, 00:14:14.219 "nvme_iov_md": false 00:14:14.219 }, 00:14:14.219 "memory_domains": [ 00:14:14.219 { 00:14:14.219 "dma_device_id": "system", 00:14:14.219 "dma_device_type": 1 00:14:14.219 }, 00:14:14.219 { 00:14:14.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.219 "dma_device_type": 2 00:14:14.219 } 00:14:14.219 ], 00:14:14.219 "driver_specific": {} 00:14:14.219 }' 00:14:14.219 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.219 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.219 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:14.219 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.219 16:31:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.219 16:31:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:14.219 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.219 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.478 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:14.478 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.478 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.478 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:14.478 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:14.478 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:14.478 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:14.738 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:14.738 "name": "BaseBdev2", 00:14:14.738 "aliases": [ 00:14:14.738 "5cfb4cca-0341-4d17-9f64-6a8dde693a22" 00:14:14.738 ], 00:14:14.738 "product_name": "Malloc disk", 00:14:14.738 "block_size": 512, 00:14:14.738 "num_blocks": 65536, 00:14:14.738 "uuid": "5cfb4cca-0341-4d17-9f64-6a8dde693a22", 00:14:14.738 "assigned_rate_limits": { 00:14:14.738 "rw_ios_per_sec": 0, 00:14:14.738 "rw_mbytes_per_sec": 0, 00:14:14.738 "r_mbytes_per_sec": 0, 00:14:14.738 "w_mbytes_per_sec": 0 00:14:14.738 }, 00:14:14.738 "claimed": true, 00:14:14.738 "claim_type": "exclusive_write", 00:14:14.738 "zoned": false, 00:14:14.738 "supported_io_types": { 00:14:14.738 "read": true, 00:14:14.738 "write": true, 00:14:14.738 "unmap": true, 00:14:14.738 "flush": true, 00:14:14.738 "reset": 
true, 00:14:14.738 "nvme_admin": false, 00:14:14.738 "nvme_io": false, 00:14:14.738 "nvme_io_md": false, 00:14:14.738 "write_zeroes": true, 00:14:14.738 "zcopy": true, 00:14:14.738 "get_zone_info": false, 00:14:14.738 "zone_management": false, 00:14:14.738 "zone_append": false, 00:14:14.738 "compare": false, 00:14:14.738 "compare_and_write": false, 00:14:14.738 "abort": true, 00:14:14.738 "seek_hole": false, 00:14:14.738 "seek_data": false, 00:14:14.738 "copy": true, 00:14:14.738 "nvme_iov_md": false 00:14:14.738 }, 00:14:14.738 "memory_domains": [ 00:14:14.738 { 00:14:14.739 "dma_device_id": "system", 00:14:14.739 "dma_device_type": 1 00:14:14.739 }, 00:14:14.739 { 00:14:14.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.739 "dma_device_type": 2 00:14:14.739 } 00:14:14.739 ], 00:14:14.739 "driver_specific": {} 00:14:14.739 }' 00:14:14.739 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.739 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.739 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:14.739 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.739 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.739 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:14.739 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.997 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.997 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:14.997 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.997 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.997 16:31:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:14.998 16:31:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:15.257 [2024-07-24 16:31:11.979259] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:15.257 [2024-07-24 16:31:11.979294] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.257 [2024-07-24 16:31:11.979352] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.257 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.515 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.515 "name": "Existed_Raid", 00:14:15.515 "uuid": "2d8c8e90-b5ac-4d26-9861-a1a59cb8ad92", 00:14:15.515 "strip_size_kb": 64, 00:14:15.515 "state": "offline", 00:14:15.515 "raid_level": "raid0", 00:14:15.515 "superblock": false, 00:14:15.515 "num_base_bdevs": 2, 00:14:15.515 "num_base_bdevs_discovered": 1, 00:14:15.515 "num_base_bdevs_operational": 1, 00:14:15.515 "base_bdevs_list": [ 00:14:15.515 { 00:14:15.515 "name": null, 00:14:15.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.515 "is_configured": false, 00:14:15.515 "data_offset": 0, 00:14:15.515 "data_size": 65536 00:14:15.515 }, 00:14:15.515 { 00:14:15.515 "name": "BaseBdev2", 00:14:15.515 "uuid": "5cfb4cca-0341-4d17-9f64-6a8dde693a22", 00:14:15.515 "is_configured": true, 00:14:15.515 "data_offset": 0, 00:14:15.515 "data_size": 65536 00:14:15.515 } 00:14:15.515 ] 00:14:15.515 }' 00:14:15.516 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.516 16:31:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.084 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:16.084 16:31:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:16.084 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.084 16:31:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:16.343 16:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:16.343 16:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:16.343 16:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:16.602 [2024-07-24 16:31:13.265883] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:16.602 [2024-07-24 16:31:13.265939] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:14:16.602 16:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:16.602 16:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:16.602 16:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.602 16:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:16.861 16:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:16.861 16:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:16.861 16:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:16.861 16:31:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 
1595182 00:14:16.861 16:31:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 1595182 ']' 00:14:16.861 16:31:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 1595182 00:14:16.861 16:31:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:16.861 16:31:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.861 16:31:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1595182 00:14:16.861 16:31:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:16.861 16:31:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:16.861 16:31:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1595182' 00:14:16.861 killing process with pid 1595182 00:14:16.861 16:31:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 1595182 00:14:16.861 [2024-07-24 16:31:13.712307] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:16.861 16:31:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 1595182 00:14:17.121 [2024-07-24 16:31:13.735168] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:19.026 16:31:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:19.026 00:14:19.026 real 0m12.017s 00:14:19.026 user 0m19.599s 00:14:19.026 sys 0m2.101s 00:14:19.026 16:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.026 16:31:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.026 ************************************ 00:14:19.026 END TEST raid_state_function_test 00:14:19.026 ************************************ 00:14:19.026 16:31:15 
bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:19.026 16:31:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:19.026 16:31:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:19.026 16:31:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:19.026 ************************************ 00:14:19.026 START TEST raid_state_function_test_sb 00:14:19.026 ************************************ 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:19.027 16:31:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1597519 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1597519' 00:14:19.027 Process raid pid: 1597519 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1597519 /var/tmp/spdk-raid.sock 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 1597519 ']' 00:14:19.027 16:31:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:19.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:19.027 16:31:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.027 [2024-07-24 16:31:15.616732] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:14:19.027 [2024-07-24 16:31:15.616849] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:01.5 cannot be used 
00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:19.027 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:19.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:19.027 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:19.027 [2024-07-24 16:31:15.831354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.286 [2024-07-24 16:31:16.119236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.854 [2024-07-24 16:31:16.490821] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:14:19.854 [2024-07-24 16:31:16.490857] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.854 16:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.854 16:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:19.854 16:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:20.114 [2024-07-24 16:31:16.887706] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:20.114 [2024-07-24 16:31:16.887761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:20.114 [2024-07-24 16:31:16.887776] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:20.114 [2024-07-24 16:31:16.887793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:20.114 16:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:20.114 16:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:20.114 16:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:20.114 16:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:20.114 16:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:20.114 16:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:20.114 16:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:14:20.114 16:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:20.114 16:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:20.114 16:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:20.114 16:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.114 16:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.373 16:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:20.373 "name": "Existed_Raid", 00:14:20.373 "uuid": "cfc38f5e-23e9-4614-805d-37c5bb3e28bd", 00:14:20.373 "strip_size_kb": 64, 00:14:20.373 "state": "configuring", 00:14:20.373 "raid_level": "raid0", 00:14:20.373 "superblock": true, 00:14:20.373 "num_base_bdevs": 2, 00:14:20.373 "num_base_bdevs_discovered": 0, 00:14:20.373 "num_base_bdevs_operational": 2, 00:14:20.373 "base_bdevs_list": [ 00:14:20.373 { 00:14:20.373 "name": "BaseBdev1", 00:14:20.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.373 "is_configured": false, 00:14:20.373 "data_offset": 0, 00:14:20.373 "data_size": 0 00:14:20.373 }, 00:14:20.373 { 00:14:20.373 "name": "BaseBdev2", 00:14:20.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.373 "is_configured": false, 00:14:20.373 "data_offset": 0, 00:14:20.373 "data_size": 0 00:14:20.373 } 00:14:20.373 ] 00:14:20.373 }' 00:14:20.373 16:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:20.373 16:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.941 16:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:21.200 [2024-07-24 16:31:17.906447] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:21.200 [2024-07-24 16:31:17.906485] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:14:21.200 16:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:21.459 [2024-07-24 16:31:18.078951] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.459 [2024-07-24 16:31:18.078991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.459 [2024-07-24 16:31:18.079004] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.459 [2024-07-24 16:31:18.079020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.459 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:21.459 [2024-07-24 16:31:18.302677] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.459 BaseBdev1 00:14:21.719 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:21.719 16:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:21.719 16:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.720 16:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
i 00:14:21.720 16:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.720 16:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.720 16:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:21.720 16:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:21.979 [ 00:14:21.979 { 00:14:21.979 "name": "BaseBdev1", 00:14:21.979 "aliases": [ 00:14:21.979 "53ba2f9f-97da-4ad9-aec6-cac2407dc809" 00:14:21.979 ], 00:14:21.979 "product_name": "Malloc disk", 00:14:21.979 "block_size": 512, 00:14:21.979 "num_blocks": 65536, 00:14:21.979 "uuid": "53ba2f9f-97da-4ad9-aec6-cac2407dc809", 00:14:21.979 "assigned_rate_limits": { 00:14:21.979 "rw_ios_per_sec": 0, 00:14:21.979 "rw_mbytes_per_sec": 0, 00:14:21.979 "r_mbytes_per_sec": 0, 00:14:21.979 "w_mbytes_per_sec": 0 00:14:21.979 }, 00:14:21.979 "claimed": true, 00:14:21.979 "claim_type": "exclusive_write", 00:14:21.979 "zoned": false, 00:14:21.979 "supported_io_types": { 00:14:21.979 "read": true, 00:14:21.979 "write": true, 00:14:21.979 "unmap": true, 00:14:21.979 "flush": true, 00:14:21.979 "reset": true, 00:14:21.979 "nvme_admin": false, 00:14:21.979 "nvme_io": false, 00:14:21.979 "nvme_io_md": false, 00:14:21.979 "write_zeroes": true, 00:14:21.979 "zcopy": true, 00:14:21.979 "get_zone_info": false, 00:14:21.979 "zone_management": false, 00:14:21.979 "zone_append": false, 00:14:21.979 "compare": false, 00:14:21.979 "compare_and_write": false, 00:14:21.979 "abort": true, 00:14:21.979 "seek_hole": false, 00:14:21.979 "seek_data": false, 00:14:21.979 "copy": true, 00:14:21.979 "nvme_iov_md": false 00:14:21.979 }, 00:14:21.979 
"memory_domains": [ 00:14:21.979 { 00:14:21.979 "dma_device_id": "system", 00:14:21.979 "dma_device_type": 1 00:14:21.979 }, 00:14:21.979 { 00:14:21.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.979 "dma_device_type": 2 00:14:21.979 } 00:14:21.979 ], 00:14:21.979 "driver_specific": {} 00:14:21.979 } 00:14:21.979 ] 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.979 16:31:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:21.979 "name": "Existed_Raid", 00:14:21.979 "uuid": "b2660ece-8fea-4d5d-821a-02a9c490dbec", 00:14:21.979 "strip_size_kb": 64, 00:14:21.979 "state": "configuring", 00:14:21.979 "raid_level": "raid0", 00:14:21.979 "superblock": true, 00:14:21.979 "num_base_bdevs": 2, 00:14:21.979 "num_base_bdevs_discovered": 1, 00:14:21.979 "num_base_bdevs_operational": 2, 00:14:21.979 "base_bdevs_list": [ 00:14:21.979 { 00:14:21.979 "name": "BaseBdev1", 00:14:21.979 "uuid": "53ba2f9f-97da-4ad9-aec6-cac2407dc809", 00:14:21.979 "is_configured": true, 00:14:21.979 "data_offset": 2048, 00:14:21.979 "data_size": 63488 00:14:21.979 }, 00:14:21.979 { 00:14:21.979 "name": "BaseBdev2", 00:14:21.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.979 "is_configured": false, 00:14:21.979 "data_offset": 0, 00:14:21.979 "data_size": 0 00:14:21.979 } 00:14:21.979 ] 00:14:21.979 }' 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:21.979 16:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.547 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:22.806 [2024-07-24 16:31:19.542061] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:22.806 [2024-07-24 16:31:19.542110] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:14:22.806 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:23.065 [2024-07-24 16:31:19.710623] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.065 [2024-07-24 16:31:19.712911] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.065 [2024-07-24 16:31:19.712953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.065 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:23.065 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:23.065 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:23.065 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:23.065 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:23.065 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:23.065 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:23.065 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:23.065 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:23.065 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:23.065 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:23.065 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:23.065 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.065 16:31:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.065 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:23.065 "name": "Existed_Raid", 00:14:23.065 "uuid": "ac9e3525-eb99-49a6-90aa-26912d3df32f", 00:14:23.065 "strip_size_kb": 64, 00:14:23.065 "state": "configuring", 00:14:23.065 "raid_level": "raid0", 00:14:23.065 "superblock": true, 00:14:23.065 "num_base_bdevs": 2, 00:14:23.065 "num_base_bdevs_discovered": 1, 00:14:23.065 "num_base_bdevs_operational": 2, 00:14:23.066 "base_bdevs_list": [ 00:14:23.066 { 00:14:23.066 "name": "BaseBdev1", 00:14:23.066 "uuid": "53ba2f9f-97da-4ad9-aec6-cac2407dc809", 00:14:23.066 "is_configured": true, 00:14:23.066 "data_offset": 2048, 00:14:23.066 "data_size": 63488 00:14:23.066 }, 00:14:23.066 { 00:14:23.066 "name": "BaseBdev2", 00:14:23.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.066 "is_configured": false, 00:14:23.066 "data_offset": 0, 00:14:23.066 "data_size": 0 00:14:23.066 } 00:14:23.066 ] 00:14:23.066 }' 00:14:23.066 16:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:23.066 16:31:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.692 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:23.951 [2024-07-24 16:31:20.622988] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.951 [2024-07-24 16:31:20.623263] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:14:23.951 [2024-07-24 16:31:20.623284] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:23.951 [2024-07-24 16:31:20.623616] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000010570 00:14:23.951 [2024-07-24 16:31:20.623842] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:14:23.951 [2024-07-24 16:31:20.623861] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:14:23.951 [2024-07-24 16:31:20.624055] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.951 BaseBdev2 00:14:23.951 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:23.951 16:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:23.951 16:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:23.951 16:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:23.951 16:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:23.951 16:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:23.951 16:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:24.210 [ 00:14:24.210 { 00:14:24.210 "name": "BaseBdev2", 00:14:24.210 "aliases": [ 00:14:24.210 "21bcdf16-123a-452e-83ce-81cea9ffc3bc" 00:14:24.210 ], 00:14:24.210 "product_name": "Malloc disk", 00:14:24.210 "block_size": 512, 00:14:24.210 "num_blocks": 65536, 00:14:24.210 "uuid": "21bcdf16-123a-452e-83ce-81cea9ffc3bc", 00:14:24.210 "assigned_rate_limits": { 00:14:24.210 "rw_ios_per_sec": 0, 00:14:24.210 
"rw_mbytes_per_sec": 0, 00:14:24.210 "r_mbytes_per_sec": 0, 00:14:24.210 "w_mbytes_per_sec": 0 00:14:24.210 }, 00:14:24.210 "claimed": true, 00:14:24.210 "claim_type": "exclusive_write", 00:14:24.210 "zoned": false, 00:14:24.210 "supported_io_types": { 00:14:24.210 "read": true, 00:14:24.210 "write": true, 00:14:24.210 "unmap": true, 00:14:24.210 "flush": true, 00:14:24.210 "reset": true, 00:14:24.210 "nvme_admin": false, 00:14:24.210 "nvme_io": false, 00:14:24.210 "nvme_io_md": false, 00:14:24.210 "write_zeroes": true, 00:14:24.210 "zcopy": true, 00:14:24.210 "get_zone_info": false, 00:14:24.210 "zone_management": false, 00:14:24.210 "zone_append": false, 00:14:24.210 "compare": false, 00:14:24.210 "compare_and_write": false, 00:14:24.210 "abort": true, 00:14:24.210 "seek_hole": false, 00:14:24.210 "seek_data": false, 00:14:24.210 "copy": true, 00:14:24.210 "nvme_iov_md": false 00:14:24.210 }, 00:14:24.210 "memory_domains": [ 00:14:24.210 { 00:14:24.210 "dma_device_id": "system", 00:14:24.210 "dma_device_type": 1 00:14:24.210 }, 00:14:24.210 { 00:14:24.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.210 "dma_device_type": 2 00:14:24.210 } 00:14:24.210 ], 00:14:24.210 "driver_specific": {} 00:14:24.210 } 00:14:24.210 ] 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:24.210 16:31:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.210 16:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.469 16:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.469 "name": "Existed_Raid", 00:14:24.469 "uuid": "ac9e3525-eb99-49a6-90aa-26912d3df32f", 00:14:24.469 "strip_size_kb": 64, 00:14:24.469 "state": "online", 00:14:24.469 "raid_level": "raid0", 00:14:24.469 "superblock": true, 00:14:24.469 "num_base_bdevs": 2, 00:14:24.469 "num_base_bdevs_discovered": 2, 00:14:24.469 "num_base_bdevs_operational": 2, 00:14:24.469 "base_bdevs_list": [ 00:14:24.469 { 00:14:24.469 "name": "BaseBdev1", 00:14:24.469 "uuid": "53ba2f9f-97da-4ad9-aec6-cac2407dc809", 00:14:24.469 "is_configured": true, 00:14:24.469 "data_offset": 2048, 00:14:24.469 "data_size": 63488 00:14:24.469 }, 00:14:24.469 { 00:14:24.469 "name": "BaseBdev2", 00:14:24.469 "uuid": "21bcdf16-123a-452e-83ce-81cea9ffc3bc", 00:14:24.469 "is_configured": true, 00:14:24.469 
"data_offset": 2048, 00:14:24.469 "data_size": 63488 00:14:24.469 } 00:14:24.469 ] 00:14:24.469 }' 00:14:24.469 16:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.469 16:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.037 16:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:25.038 16:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:25.038 16:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:25.038 16:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:25.038 16:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:25.038 16:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:25.038 16:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:25.038 16:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:25.297 [2024-07-24 16:31:21.987062] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.297 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:25.297 "name": "Existed_Raid", 00:14:25.297 "aliases": [ 00:14:25.297 "ac9e3525-eb99-49a6-90aa-26912d3df32f" 00:14:25.297 ], 00:14:25.297 "product_name": "Raid Volume", 00:14:25.297 "block_size": 512, 00:14:25.297 "num_blocks": 126976, 00:14:25.297 "uuid": "ac9e3525-eb99-49a6-90aa-26912d3df32f", 00:14:25.297 "assigned_rate_limits": { 00:14:25.297 "rw_ios_per_sec": 0, 00:14:25.297 "rw_mbytes_per_sec": 0, 00:14:25.297 "r_mbytes_per_sec": 0, 00:14:25.297 
"w_mbytes_per_sec": 0 00:14:25.297 }, 00:14:25.297 "claimed": false, 00:14:25.297 "zoned": false, 00:14:25.297 "supported_io_types": { 00:14:25.297 "read": true, 00:14:25.297 "write": true, 00:14:25.297 "unmap": true, 00:14:25.297 "flush": true, 00:14:25.297 "reset": true, 00:14:25.297 "nvme_admin": false, 00:14:25.297 "nvme_io": false, 00:14:25.297 "nvme_io_md": false, 00:14:25.297 "write_zeroes": true, 00:14:25.297 "zcopy": false, 00:14:25.297 "get_zone_info": false, 00:14:25.297 "zone_management": false, 00:14:25.297 "zone_append": false, 00:14:25.297 "compare": false, 00:14:25.297 "compare_and_write": false, 00:14:25.297 "abort": false, 00:14:25.297 "seek_hole": false, 00:14:25.297 "seek_data": false, 00:14:25.297 "copy": false, 00:14:25.297 "nvme_iov_md": false 00:14:25.297 }, 00:14:25.297 "memory_domains": [ 00:14:25.297 { 00:14:25.297 "dma_device_id": "system", 00:14:25.297 "dma_device_type": 1 00:14:25.297 }, 00:14:25.297 { 00:14:25.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.297 "dma_device_type": 2 00:14:25.297 }, 00:14:25.297 { 00:14:25.297 "dma_device_id": "system", 00:14:25.297 "dma_device_type": 1 00:14:25.297 }, 00:14:25.297 { 00:14:25.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.297 "dma_device_type": 2 00:14:25.297 } 00:14:25.297 ], 00:14:25.297 "driver_specific": { 00:14:25.297 "raid": { 00:14:25.297 "uuid": "ac9e3525-eb99-49a6-90aa-26912d3df32f", 00:14:25.297 "strip_size_kb": 64, 00:14:25.297 "state": "online", 00:14:25.297 "raid_level": "raid0", 00:14:25.297 "superblock": true, 00:14:25.297 "num_base_bdevs": 2, 00:14:25.297 "num_base_bdevs_discovered": 2, 00:14:25.297 "num_base_bdevs_operational": 2, 00:14:25.297 "base_bdevs_list": [ 00:14:25.297 { 00:14:25.297 "name": "BaseBdev1", 00:14:25.297 "uuid": "53ba2f9f-97da-4ad9-aec6-cac2407dc809", 00:14:25.297 "is_configured": true, 00:14:25.297 "data_offset": 2048, 00:14:25.297 "data_size": 63488 00:14:25.297 }, 00:14:25.297 { 00:14:25.297 "name": "BaseBdev2", 00:14:25.297 
"uuid": "21bcdf16-123a-452e-83ce-81cea9ffc3bc", 00:14:25.297 "is_configured": true, 00:14:25.297 "data_offset": 2048, 00:14:25.297 "data_size": 63488 00:14:25.297 } 00:14:25.297 ] 00:14:25.297 } 00:14:25.297 } 00:14:25.297 }' 00:14:25.297 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:25.297 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:25.297 BaseBdev2' 00:14:25.297 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:25.297 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:25.297 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:25.556 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:25.556 "name": "BaseBdev1", 00:14:25.556 "aliases": [ 00:14:25.556 "53ba2f9f-97da-4ad9-aec6-cac2407dc809" 00:14:25.556 ], 00:14:25.556 "product_name": "Malloc disk", 00:14:25.556 "block_size": 512, 00:14:25.556 "num_blocks": 65536, 00:14:25.556 "uuid": "53ba2f9f-97da-4ad9-aec6-cac2407dc809", 00:14:25.556 "assigned_rate_limits": { 00:14:25.556 "rw_ios_per_sec": 0, 00:14:25.556 "rw_mbytes_per_sec": 0, 00:14:25.556 "r_mbytes_per_sec": 0, 00:14:25.556 "w_mbytes_per_sec": 0 00:14:25.556 }, 00:14:25.556 "claimed": true, 00:14:25.556 "claim_type": "exclusive_write", 00:14:25.556 "zoned": false, 00:14:25.556 "supported_io_types": { 00:14:25.556 "read": true, 00:14:25.556 "write": true, 00:14:25.556 "unmap": true, 00:14:25.556 "flush": true, 00:14:25.556 "reset": true, 00:14:25.556 "nvme_admin": false, 00:14:25.556 "nvme_io": false, 00:14:25.556 "nvme_io_md": false, 00:14:25.556 "write_zeroes": true, 
00:14:25.556 "zcopy": true, 00:14:25.556 "get_zone_info": false, 00:14:25.556 "zone_management": false, 00:14:25.556 "zone_append": false, 00:14:25.556 "compare": false, 00:14:25.556 "compare_and_write": false, 00:14:25.556 "abort": true, 00:14:25.557 "seek_hole": false, 00:14:25.557 "seek_data": false, 00:14:25.557 "copy": true, 00:14:25.557 "nvme_iov_md": false 00:14:25.557 }, 00:14:25.557 "memory_domains": [ 00:14:25.557 { 00:14:25.557 "dma_device_id": "system", 00:14:25.557 "dma_device_type": 1 00:14:25.557 }, 00:14:25.557 { 00:14:25.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.557 "dma_device_type": 2 00:14:25.557 } 00:14:25.557 ], 00:14:25.557 "driver_specific": {} 00:14:25.557 }' 00:14:25.557 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:25.557 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:25.557 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:25.557 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:25.815 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:25.815 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:25.815 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:25.815 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:25.815 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:25.815 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:25.815 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:25.815 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:25.815 16:31:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:25.815 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:25.815 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:26.074 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:26.074 "name": "BaseBdev2", 00:14:26.074 "aliases": [ 00:14:26.074 "21bcdf16-123a-452e-83ce-81cea9ffc3bc" 00:14:26.074 ], 00:14:26.074 "product_name": "Malloc disk", 00:14:26.074 "block_size": 512, 00:14:26.074 "num_blocks": 65536, 00:14:26.074 "uuid": "21bcdf16-123a-452e-83ce-81cea9ffc3bc", 00:14:26.074 "assigned_rate_limits": { 00:14:26.074 "rw_ios_per_sec": 0, 00:14:26.074 "rw_mbytes_per_sec": 0, 00:14:26.074 "r_mbytes_per_sec": 0, 00:14:26.074 "w_mbytes_per_sec": 0 00:14:26.074 }, 00:14:26.074 "claimed": true, 00:14:26.074 "claim_type": "exclusive_write", 00:14:26.074 "zoned": false, 00:14:26.074 "supported_io_types": { 00:14:26.074 "read": true, 00:14:26.074 "write": true, 00:14:26.074 "unmap": true, 00:14:26.074 "flush": true, 00:14:26.074 "reset": true, 00:14:26.074 "nvme_admin": false, 00:14:26.074 "nvme_io": false, 00:14:26.074 "nvme_io_md": false, 00:14:26.074 "write_zeroes": true, 00:14:26.074 "zcopy": true, 00:14:26.074 "get_zone_info": false, 00:14:26.074 "zone_management": false, 00:14:26.074 "zone_append": false, 00:14:26.074 "compare": false, 00:14:26.074 "compare_and_write": false, 00:14:26.074 "abort": true, 00:14:26.074 "seek_hole": false, 00:14:26.074 "seek_data": false, 00:14:26.074 "copy": true, 00:14:26.074 "nvme_iov_md": false 00:14:26.074 }, 00:14:26.074 "memory_domains": [ 00:14:26.074 { 00:14:26.074 "dma_device_id": "system", 00:14:26.074 "dma_device_type": 1 00:14:26.074 }, 00:14:26.074 { 00:14:26.074 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:26.074 "dma_device_type": 2 00:14:26.074 } 00:14:26.074 ], 00:14:26.074 "driver_specific": {} 00:14:26.074 }' 00:14:26.074 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:26.074 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:26.332 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:26.332 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:26.332 16:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:26.332 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:26.332 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:26.332 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:26.332 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:26.332 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:26.332 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:26.590 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:26.590 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:26.590 [2024-07-24 16:31:23.394712] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:26.590 [2024-07-24 16:31:23.394748] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:26.590 [2024-07-24 16:31:23.394808] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.590 16:31:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:26.590 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:26.590 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:26.590 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:26.590 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:26.590 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:26.590 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:26.590 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:26.590 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:26.590 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:26.591 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:26.591 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:26.591 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:26.591 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:26.591 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:26.591 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.591 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:26.849 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:26.849 "name": "Existed_Raid", 00:14:26.849 "uuid": "ac9e3525-eb99-49a6-90aa-26912d3df32f", 00:14:26.849 "strip_size_kb": 64, 00:14:26.849 "state": "offline", 00:14:26.849 "raid_level": "raid0", 00:14:26.849 "superblock": true, 00:14:26.849 "num_base_bdevs": 2, 00:14:26.849 "num_base_bdevs_discovered": 1, 00:14:26.850 "num_base_bdevs_operational": 1, 00:14:26.850 "base_bdevs_list": [ 00:14:26.850 { 00:14:26.850 "name": null, 00:14:26.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.850 "is_configured": false, 00:14:26.850 "data_offset": 2048, 00:14:26.850 "data_size": 63488 00:14:26.850 }, 00:14:26.850 { 00:14:26.850 "name": "BaseBdev2", 00:14:26.850 "uuid": "21bcdf16-123a-452e-83ce-81cea9ffc3bc", 00:14:26.850 "is_configured": true, 00:14:26.850 "data_offset": 2048, 00:14:26.850 "data_size": 63488 00:14:26.850 } 00:14:26.850 ] 00:14:26.850 }' 00:14:26.850 16:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.850 16:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.416 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:27.416 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:27.416 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.416 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:27.675 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:27.675 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:27.675 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:27.934 [2024-07-24 16:31:24.678296] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:27.934 [2024-07-24 16:31:24.678353] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:14:28.193 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:28.193 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:28.193 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.193 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:28.193 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:28.193 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:28.193 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:28.193 16:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1597519 00:14:28.193 16:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 1597519 ']' 00:14:28.193 16:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 1597519 00:14:28.193 16:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:28.193 16:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:28.193 16:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 1597519 00:14:28.193 16:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:28.193 16:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:28.193 16:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1597519' 00:14:28.193 killing process with pid 1597519 00:14:28.193 16:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 1597519 00:14:28.193 [2024-07-24 16:31:25.018985] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:28.193 16:31:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 1597519 00:14:28.193 [2024-07-24 16:31:25.042925] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:30.100 16:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:30.100 00:14:30.100 real 0m11.232s 00:14:30.100 user 0m18.200s 00:14:30.100 sys 0m1.885s 00:14:30.100 16:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:30.100 16:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.100 ************************************ 00:14:30.100 END TEST raid_state_function_test_sb 00:14:30.100 ************************************ 00:14:30.100 16:31:26 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:30.100 16:31:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:30.100 16:31:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:30.100 16:31:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:30.100 ************************************ 00:14:30.100 START TEST raid_superblock_test 00:14:30.100 ************************************ 00:14:30.100 16:31:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=1599604 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 1599604 
/var/tmp/spdk-raid.sock 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 1599604 ']' 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:30.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:30.100 16:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.100 [2024-07-24 16:31:26.920358] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:14:30.100 [2024-07-24 16:31:26.920484] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1599604 ] 00:14:30.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.359 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:30.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.359 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:30.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.359 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:30.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3d:02.3 cannot be used 
00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:30.360 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:30.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:30.360 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:30.360 [2024-07-24 16:31:27.145809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.619 [2024-07-24 16:31:27.415940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.185 [2024-07-24 16:31:27.755814] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.185 [2024-07-24 16:31:27.755848] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.185 16:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:31.185 16:31:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:31.185 16:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:14:31.185 16:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:31.185 16:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:14:31.185 16:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:14:31.185 16:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:31.185 16:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:31.185 16:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:31.185 16:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:31.185 16:31:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:31.443 malloc1 00:14:31.443 16:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:31.701 [2024-07-24 16:31:28.433872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:31.701 [2024-07-24 16:31:28.433933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.701 [2024-07-24 16:31:28.433964] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:14:31.701 [2024-07-24 16:31:28.433984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.701 [2024-07-24 16:31:28.436732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.702 [2024-07-24 16:31:28.436766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:31.702 pt1 00:14:31.702 16:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:31.702 16:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:31.702 16:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:14:31.702 16:31:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:14:31.702 16:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:31.702 16:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:31.702 16:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:31.702 16:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:31.702 16:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:31.960 malloc2 00:14:31.960 16:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:32.218 [2024-07-24 16:31:28.934703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:32.218 [2024-07-24 16:31:28.934764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.218 [2024-07-24 16:31:28.934793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:14:32.218 [2024-07-24 16:31:28.934809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.218 [2024-07-24 16:31:28.937525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.218 [2024-07-24 16:31:28.937565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:32.218 pt2 00:14:32.218 16:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:32.218 16:31:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:32.218 16:31:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:32.476 [2024-07-24 16:31:29.151309] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:32.476 [2024-07-24 16:31:29.153569] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:32.476 [2024-07-24 16:31:29.153783] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:14:32.476 [2024-07-24 16:31:29.153802] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:32.476 [2024-07-24 16:31:29.154134] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:14:32.476 [2024-07-24 16:31:29.154381] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:14:32.476 [2024-07-24 16:31:29.154399] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:14:32.476 [2024-07-24 16:31:29.154599] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.476 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:32.476 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:32.476 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:32.477 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:32.477 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:32.477 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:32.477 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:14:32.477 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:32.477 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:32.477 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:32.477 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.477 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.477 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:32.477 "name": "raid_bdev1", 00:14:32.477 "uuid": "cc6b8e79-4964-41ed-a23a-29d2a94c36d5", 00:14:32.477 "strip_size_kb": 64, 00:14:32.477 "state": "online", 00:14:32.477 "raid_level": "raid0", 00:14:32.477 "superblock": true, 00:14:32.477 "num_base_bdevs": 2, 00:14:32.477 "num_base_bdevs_discovered": 2, 00:14:32.477 "num_base_bdevs_operational": 2, 00:14:32.477 "base_bdevs_list": [ 00:14:32.477 { 00:14:32.477 "name": "pt1", 00:14:32.477 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:32.477 "is_configured": true, 00:14:32.477 "data_offset": 2048, 00:14:32.477 "data_size": 63488 00:14:32.477 }, 00:14:32.477 { 00:14:32.477 "name": "pt2", 00:14:32.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.477 "is_configured": true, 00:14:32.477 "data_offset": 2048, 00:14:32.477 "data_size": 63488 00:14:32.477 } 00:14:32.477 ] 00:14:32.477 }' 00:14:32.477 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:32.477 16:31:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.044 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:14:33.044 16:31:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:33.044 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:33.044 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:33.044 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:33.044 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:33.044 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:33.044 16:31:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:33.302 [2024-07-24 16:31:30.025940] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.302 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:33.302 "name": "raid_bdev1", 00:14:33.302 "aliases": [ 00:14:33.302 "cc6b8e79-4964-41ed-a23a-29d2a94c36d5" 00:14:33.302 ], 00:14:33.302 "product_name": "Raid Volume", 00:14:33.302 "block_size": 512, 00:14:33.302 "num_blocks": 126976, 00:14:33.302 "uuid": "cc6b8e79-4964-41ed-a23a-29d2a94c36d5", 00:14:33.302 "assigned_rate_limits": { 00:14:33.302 "rw_ios_per_sec": 0, 00:14:33.302 "rw_mbytes_per_sec": 0, 00:14:33.302 "r_mbytes_per_sec": 0, 00:14:33.302 "w_mbytes_per_sec": 0 00:14:33.302 }, 00:14:33.302 "claimed": false, 00:14:33.302 "zoned": false, 00:14:33.302 "supported_io_types": { 00:14:33.302 "read": true, 00:14:33.302 "write": true, 00:14:33.302 "unmap": true, 00:14:33.302 "flush": true, 00:14:33.302 "reset": true, 00:14:33.302 "nvme_admin": false, 00:14:33.302 "nvme_io": false, 00:14:33.302 "nvme_io_md": false, 00:14:33.302 "write_zeroes": true, 00:14:33.302 "zcopy": false, 00:14:33.302 "get_zone_info": false, 00:14:33.302 "zone_management": false, 00:14:33.302 "zone_append": false, 
00:14:33.302 "compare": false, 00:14:33.302 "compare_and_write": false, 00:14:33.302 "abort": false, 00:14:33.302 "seek_hole": false, 00:14:33.302 "seek_data": false, 00:14:33.302 "copy": false, 00:14:33.302 "nvme_iov_md": false 00:14:33.302 }, 00:14:33.302 "memory_domains": [ 00:14:33.302 { 00:14:33.302 "dma_device_id": "system", 00:14:33.302 "dma_device_type": 1 00:14:33.302 }, 00:14:33.302 { 00:14:33.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.302 "dma_device_type": 2 00:14:33.302 }, 00:14:33.302 { 00:14:33.302 "dma_device_id": "system", 00:14:33.302 "dma_device_type": 1 00:14:33.302 }, 00:14:33.302 { 00:14:33.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.302 "dma_device_type": 2 00:14:33.302 } 00:14:33.302 ], 00:14:33.302 "driver_specific": { 00:14:33.302 "raid": { 00:14:33.302 "uuid": "cc6b8e79-4964-41ed-a23a-29d2a94c36d5", 00:14:33.302 "strip_size_kb": 64, 00:14:33.302 "state": "online", 00:14:33.303 "raid_level": "raid0", 00:14:33.303 "superblock": true, 00:14:33.303 "num_base_bdevs": 2, 00:14:33.303 "num_base_bdevs_discovered": 2, 00:14:33.303 "num_base_bdevs_operational": 2, 00:14:33.303 "base_bdevs_list": [ 00:14:33.303 { 00:14:33.303 "name": "pt1", 00:14:33.303 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:33.303 "is_configured": true, 00:14:33.303 "data_offset": 2048, 00:14:33.303 "data_size": 63488 00:14:33.303 }, 00:14:33.303 { 00:14:33.303 "name": "pt2", 00:14:33.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:33.303 "is_configured": true, 00:14:33.303 "data_offset": 2048, 00:14:33.303 "data_size": 63488 00:14:33.303 } 00:14:33.303 ] 00:14:33.303 } 00:14:33.303 } 00:14:33.303 }' 00:14:33.303 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:33.303 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:33.303 pt2' 00:14:33.303 16:31:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:33.303 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:33.303 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:33.561 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:33.561 "name": "pt1", 00:14:33.561 "aliases": [ 00:14:33.561 "00000000-0000-0000-0000-000000000001" 00:14:33.561 ], 00:14:33.561 "product_name": "passthru", 00:14:33.561 "block_size": 512, 00:14:33.561 "num_blocks": 65536, 00:14:33.561 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:33.561 "assigned_rate_limits": { 00:14:33.561 "rw_ios_per_sec": 0, 00:14:33.561 "rw_mbytes_per_sec": 0, 00:14:33.561 "r_mbytes_per_sec": 0, 00:14:33.561 "w_mbytes_per_sec": 0 00:14:33.561 }, 00:14:33.561 "claimed": true, 00:14:33.561 "claim_type": "exclusive_write", 00:14:33.561 "zoned": false, 00:14:33.561 "supported_io_types": { 00:14:33.561 "read": true, 00:14:33.561 "write": true, 00:14:33.561 "unmap": true, 00:14:33.561 "flush": true, 00:14:33.561 "reset": true, 00:14:33.561 "nvme_admin": false, 00:14:33.561 "nvme_io": false, 00:14:33.561 "nvme_io_md": false, 00:14:33.561 "write_zeroes": true, 00:14:33.561 "zcopy": true, 00:14:33.561 "get_zone_info": false, 00:14:33.561 "zone_management": false, 00:14:33.561 "zone_append": false, 00:14:33.561 "compare": false, 00:14:33.561 "compare_and_write": false, 00:14:33.561 "abort": true, 00:14:33.561 "seek_hole": false, 00:14:33.561 "seek_data": false, 00:14:33.561 "copy": true, 00:14:33.561 "nvme_iov_md": false 00:14:33.561 }, 00:14:33.561 "memory_domains": [ 00:14:33.561 { 00:14:33.561 "dma_device_id": "system", 00:14:33.561 "dma_device_type": 1 00:14:33.561 }, 00:14:33.561 { 00:14:33.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.561 
"dma_device_type": 2 00:14:33.561 } 00:14:33.561 ], 00:14:33.561 "driver_specific": { 00:14:33.561 "passthru": { 00:14:33.561 "name": "pt1", 00:14:33.561 "base_bdev_name": "malloc1" 00:14:33.561 } 00:14:33.561 } 00:14:33.561 }' 00:14:33.561 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:33.561 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:33.561 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:33.561 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:33.820 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:33.820 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:33.820 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:33.820 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:33.820 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:33.820 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:33.820 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:33.820 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:33.820 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:33.820 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:33.820 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:34.078 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:34.078 "name": "pt2", 00:14:34.079 "aliases": [ 00:14:34.079 
"00000000-0000-0000-0000-000000000002" 00:14:34.079 ], 00:14:34.079 "product_name": "passthru", 00:14:34.079 "block_size": 512, 00:14:34.079 "num_blocks": 65536, 00:14:34.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.079 "assigned_rate_limits": { 00:14:34.079 "rw_ios_per_sec": 0, 00:14:34.079 "rw_mbytes_per_sec": 0, 00:14:34.079 "r_mbytes_per_sec": 0, 00:14:34.079 "w_mbytes_per_sec": 0 00:14:34.079 }, 00:14:34.079 "claimed": true, 00:14:34.079 "claim_type": "exclusive_write", 00:14:34.079 "zoned": false, 00:14:34.079 "supported_io_types": { 00:14:34.079 "read": true, 00:14:34.079 "write": true, 00:14:34.079 "unmap": true, 00:14:34.079 "flush": true, 00:14:34.079 "reset": true, 00:14:34.079 "nvme_admin": false, 00:14:34.079 "nvme_io": false, 00:14:34.079 "nvme_io_md": false, 00:14:34.079 "write_zeroes": true, 00:14:34.079 "zcopy": true, 00:14:34.079 "get_zone_info": false, 00:14:34.079 "zone_management": false, 00:14:34.079 "zone_append": false, 00:14:34.079 "compare": false, 00:14:34.079 "compare_and_write": false, 00:14:34.079 "abort": true, 00:14:34.079 "seek_hole": false, 00:14:34.079 "seek_data": false, 00:14:34.079 "copy": true, 00:14:34.079 "nvme_iov_md": false 00:14:34.079 }, 00:14:34.079 "memory_domains": [ 00:14:34.079 { 00:14:34.079 "dma_device_id": "system", 00:14:34.079 "dma_device_type": 1 00:14:34.079 }, 00:14:34.079 { 00:14:34.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.079 "dma_device_type": 2 00:14:34.079 } 00:14:34.079 ], 00:14:34.079 "driver_specific": { 00:14:34.079 "passthru": { 00:14:34.079 "name": "pt2", 00:14:34.079 "base_bdev_name": "malloc2" 00:14:34.079 } 00:14:34.079 } 00:14:34.079 }' 00:14:34.079 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.079 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:34.079 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:34.079 16:31:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.337 16:31:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:34.337 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:34.337 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:34.337 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:34.337 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:34.337 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:34.337 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:34.337 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:34.337 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:34.337 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:14:34.597 [2024-07-24 16:31:31.377622] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.597 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=cc6b8e79-4964-41ed-a23a-29d2a94c36d5 00:14:34.597 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z cc6b8e79-4964-41ed-a23a-29d2a94c36d5 ']' 00:14:34.597 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:34.856 [2024-07-24 16:31:31.605926] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:34.856 [2024-07-24 16:31:31.605956] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:14:34.856 [2024-07-24 16:31:31.606047] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.856 [2024-07-24 16:31:31.606106] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.856 [2024-07-24 16:31:31.606128] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:14:34.856 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.856 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:14:35.115 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:14:35.115 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:14:35.115 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:35.115 16:31:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:35.374 16:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:35.374 16:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:35.633 16:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:35.633 16:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:35.893 16:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:14:35.893 
16:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:35.893 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:35.893 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:35.893 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:35.893 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.893 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:35.893 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.893 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:35.893 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:35.893 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:35.893 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:14:35.893 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n 
raid_bdev1 00:14:35.893 [2024-07-24 16:31:32.748987] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:35.893 [2024-07-24 16:31:32.751409] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:35.893 [2024-07-24 16:31:32.751483] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:35.893 [2024-07-24 16:31:32.751543] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:35.893 [2024-07-24 16:31:32.751567] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.893 [2024-07-24 16:31:32.751584] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:14:35.893 request: 00:14:35.893 { 00:14:35.893 "name": "raid_bdev1", 00:14:35.893 "raid_level": "raid0", 00:14:35.893 "base_bdevs": [ 00:14:35.893 "malloc1", 00:14:35.893 "malloc2" 00:14:35.893 ], 00:14:35.893 "strip_size_kb": 64, 00:14:35.893 "superblock": false, 00:14:35.893 "method": "bdev_raid_create", 00:14:35.893 "req_id": 1 00:14:35.893 } 00:14:35.893 Got JSON-RPC error response 00:14:35.893 response: 00:14:35.893 { 00:14:35.893 "code": -17, 00:14:35.893 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:35.893 } 00:14:36.152 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:36.152 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:36.152 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:36.152 16:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:36.152 16:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:14:36.152 16:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:14:36.152 16:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:14:36.152 16:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:14:36.152 16:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:36.411 [2024-07-24 16:31:33.206124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:36.411 [2024-07-24 16:31:33.206200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.411 [2024-07-24 16:31:33.206229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:14:36.411 [2024-07-24 16:31:33.206248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.411 [2024-07-24 16:31:33.208999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.411 [2024-07-24 16:31:33.209036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:36.411 [2024-07-24 16:31:33.209135] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:36.411 [2024-07-24 16:31:33.209225] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:36.411 pt1 00:14:36.411 16:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:36.411 16:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:36.411 16:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:36.411 16:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:14:36.411 16:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:36.411 16:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:36.411 16:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:36.411 16:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:36.411 16:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:36.411 16:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:36.411 16:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.411 16:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.670 16:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.670 "name": "raid_bdev1", 00:14:36.670 "uuid": "cc6b8e79-4964-41ed-a23a-29d2a94c36d5", 00:14:36.671 "strip_size_kb": 64, 00:14:36.671 "state": "configuring", 00:14:36.671 "raid_level": "raid0", 00:14:36.671 "superblock": true, 00:14:36.671 "num_base_bdevs": 2, 00:14:36.671 "num_base_bdevs_discovered": 1, 00:14:36.671 "num_base_bdevs_operational": 2, 00:14:36.671 "base_bdevs_list": [ 00:14:36.671 { 00:14:36.671 "name": "pt1", 00:14:36.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:36.671 "is_configured": true, 00:14:36.671 "data_offset": 2048, 00:14:36.671 "data_size": 63488 00:14:36.671 }, 00:14:36.671 { 00:14:36.671 "name": null, 00:14:36.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.671 "is_configured": false, 00:14:36.671 "data_offset": 2048, 00:14:36.671 "data_size": 63488 00:14:36.671 } 00:14:36.671 ] 00:14:36.671 }' 00:14:36.671 16:31:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.671 16:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.290 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:14:37.290 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:14:37.290 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:37.290 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:37.548 [2024-07-24 16:31:34.240919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:37.548 [2024-07-24 16:31:34.240989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.548 [2024-07-24 16:31:34.241016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:14:37.549 [2024-07-24 16:31:34.241035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.549 [2024-07-24 16:31:34.241664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.549 [2024-07-24 16:31:34.241696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:37.549 [2024-07-24 16:31:34.241795] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:37.549 [2024-07-24 16:31:34.241832] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:37.549 [2024-07-24 16:31:34.242000] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:14:37.549 [2024-07-24 16:31:34.242018] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:37.549 [2024-07-24 16:31:34.242320] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:14:37.549 [2024-07-24 16:31:34.242536] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:14:37.549 [2024-07-24 16:31:34.242551] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:14:37.549 [2024-07-24 16:31:34.242742] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.549 pt2 00:14:37.549 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:14:37.549 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:37.549 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:37.549 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:37.549 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:37.549 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:37.549 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:37.549 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:37.549 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:37.549 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:37.549 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:37.549 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:37.549 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:14:37.549 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.808 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:37.808 "name": "raid_bdev1", 00:14:37.808 "uuid": "cc6b8e79-4964-41ed-a23a-29d2a94c36d5", 00:14:37.808 "strip_size_kb": 64, 00:14:37.808 "state": "online", 00:14:37.808 "raid_level": "raid0", 00:14:37.808 "superblock": true, 00:14:37.808 "num_base_bdevs": 2, 00:14:37.808 "num_base_bdevs_discovered": 2, 00:14:37.808 "num_base_bdevs_operational": 2, 00:14:37.808 "base_bdevs_list": [ 00:14:37.808 { 00:14:37.808 "name": "pt1", 00:14:37.808 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.808 "is_configured": true, 00:14:37.808 "data_offset": 2048, 00:14:37.808 "data_size": 63488 00:14:37.808 }, 00:14:37.808 { 00:14:37.808 "name": "pt2", 00:14:37.808 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.808 "is_configured": true, 00:14:37.808 "data_offset": 2048, 00:14:37.808 "data_size": 63488 00:14:37.808 } 00:14:37.808 ] 00:14:37.808 }' 00:14:37.808 16:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:37.808 16:31:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.375 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:14:38.375 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:38.375 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:38.375 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:38.375 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:38.375 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:38.375 16:31:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:38.375 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:38.634 [2024-07-24 16:31:35.284100] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.634 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:38.634 "name": "raid_bdev1", 00:14:38.634 "aliases": [ 00:14:38.634 "cc6b8e79-4964-41ed-a23a-29d2a94c36d5" 00:14:38.634 ], 00:14:38.634 "product_name": "Raid Volume", 00:14:38.634 "block_size": 512, 00:14:38.634 "num_blocks": 126976, 00:14:38.634 "uuid": "cc6b8e79-4964-41ed-a23a-29d2a94c36d5", 00:14:38.634 "assigned_rate_limits": { 00:14:38.634 "rw_ios_per_sec": 0, 00:14:38.634 "rw_mbytes_per_sec": 0, 00:14:38.634 "r_mbytes_per_sec": 0, 00:14:38.634 "w_mbytes_per_sec": 0 00:14:38.634 }, 00:14:38.634 "claimed": false, 00:14:38.634 "zoned": false, 00:14:38.634 "supported_io_types": { 00:14:38.634 "read": true, 00:14:38.634 "write": true, 00:14:38.634 "unmap": true, 00:14:38.634 "flush": true, 00:14:38.634 "reset": true, 00:14:38.634 "nvme_admin": false, 00:14:38.634 "nvme_io": false, 00:14:38.634 "nvme_io_md": false, 00:14:38.634 "write_zeroes": true, 00:14:38.634 "zcopy": false, 00:14:38.634 "get_zone_info": false, 00:14:38.634 "zone_management": false, 00:14:38.634 "zone_append": false, 00:14:38.634 "compare": false, 00:14:38.634 "compare_and_write": false, 00:14:38.634 "abort": false, 00:14:38.634 "seek_hole": false, 00:14:38.634 "seek_data": false, 00:14:38.634 "copy": false, 00:14:38.634 "nvme_iov_md": false 00:14:38.634 }, 00:14:38.634 "memory_domains": [ 00:14:38.634 { 00:14:38.634 "dma_device_id": "system", 00:14:38.634 "dma_device_type": 1 00:14:38.634 }, 00:14:38.634 { 00:14:38.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.634 "dma_device_type": 2 00:14:38.634 }, 00:14:38.634 { 00:14:38.634 
"dma_device_id": "system", 00:14:38.634 "dma_device_type": 1 00:14:38.634 }, 00:14:38.634 { 00:14:38.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.634 "dma_device_type": 2 00:14:38.634 } 00:14:38.634 ], 00:14:38.634 "driver_specific": { 00:14:38.634 "raid": { 00:14:38.634 "uuid": "cc6b8e79-4964-41ed-a23a-29d2a94c36d5", 00:14:38.634 "strip_size_kb": 64, 00:14:38.634 "state": "online", 00:14:38.634 "raid_level": "raid0", 00:14:38.634 "superblock": true, 00:14:38.634 "num_base_bdevs": 2, 00:14:38.634 "num_base_bdevs_discovered": 2, 00:14:38.634 "num_base_bdevs_operational": 2, 00:14:38.634 "base_bdevs_list": [ 00:14:38.634 { 00:14:38.634 "name": "pt1", 00:14:38.634 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.634 "is_configured": true, 00:14:38.634 "data_offset": 2048, 00:14:38.634 "data_size": 63488 00:14:38.634 }, 00:14:38.634 { 00:14:38.634 "name": "pt2", 00:14:38.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.634 "is_configured": true, 00:14:38.634 "data_offset": 2048, 00:14:38.634 "data_size": 63488 00:14:38.634 } 00:14:38.634 ] 00:14:38.634 } 00:14:38.634 } 00:14:38.634 }' 00:14:38.634 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:38.634 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:38.634 pt2' 00:14:38.634 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:38.634 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:38.634 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:38.894 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:38.894 "name": "pt1", 00:14:38.894 "aliases": [ 00:14:38.894 
"00000000-0000-0000-0000-000000000001" 00:14:38.894 ], 00:14:38.894 "product_name": "passthru", 00:14:38.894 "block_size": 512, 00:14:38.894 "num_blocks": 65536, 00:14:38.894 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:38.894 "assigned_rate_limits": { 00:14:38.894 "rw_ios_per_sec": 0, 00:14:38.894 "rw_mbytes_per_sec": 0, 00:14:38.894 "r_mbytes_per_sec": 0, 00:14:38.894 "w_mbytes_per_sec": 0 00:14:38.894 }, 00:14:38.894 "claimed": true, 00:14:38.894 "claim_type": "exclusive_write", 00:14:38.894 "zoned": false, 00:14:38.894 "supported_io_types": { 00:14:38.894 "read": true, 00:14:38.894 "write": true, 00:14:38.894 "unmap": true, 00:14:38.894 "flush": true, 00:14:38.894 "reset": true, 00:14:38.894 "nvme_admin": false, 00:14:38.894 "nvme_io": false, 00:14:38.894 "nvme_io_md": false, 00:14:38.894 "write_zeroes": true, 00:14:38.894 "zcopy": true, 00:14:38.894 "get_zone_info": false, 00:14:38.894 "zone_management": false, 00:14:38.894 "zone_append": false, 00:14:38.894 "compare": false, 00:14:38.894 "compare_and_write": false, 00:14:38.894 "abort": true, 00:14:38.894 "seek_hole": false, 00:14:38.894 "seek_data": false, 00:14:38.894 "copy": true, 00:14:38.894 "nvme_iov_md": false 00:14:38.894 }, 00:14:38.894 "memory_domains": [ 00:14:38.894 { 00:14:38.894 "dma_device_id": "system", 00:14:38.894 "dma_device_type": 1 00:14:38.894 }, 00:14:38.894 { 00:14:38.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.894 "dma_device_type": 2 00:14:38.894 } 00:14:38.894 ], 00:14:38.894 "driver_specific": { 00:14:38.894 "passthru": { 00:14:38.894 "name": "pt1", 00:14:38.894 "base_bdev_name": "malloc1" 00:14:38.894 } 00:14:38.894 } 00:14:38.894 }' 00:14:38.894 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:38.894 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:38.894 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:38.894 16:31:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:38.894 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.153 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:39.153 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.153 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.153 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:39.153 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.153 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.153 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:39.153 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:39.153 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:39.153 16:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:39.412 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:39.412 "name": "pt2", 00:14:39.412 "aliases": [ 00:14:39.412 "00000000-0000-0000-0000-000000000002" 00:14:39.412 ], 00:14:39.412 "product_name": "passthru", 00:14:39.412 "block_size": 512, 00:14:39.412 "num_blocks": 65536, 00:14:39.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.412 "assigned_rate_limits": { 00:14:39.412 "rw_ios_per_sec": 0, 00:14:39.412 "rw_mbytes_per_sec": 0, 00:14:39.412 "r_mbytes_per_sec": 0, 00:14:39.412 "w_mbytes_per_sec": 0 00:14:39.412 }, 00:14:39.412 "claimed": true, 00:14:39.412 "claim_type": "exclusive_write", 00:14:39.412 "zoned": false, 00:14:39.412 "supported_io_types": { 
00:14:39.412 "read": true, 00:14:39.412 "write": true, 00:14:39.412 "unmap": true, 00:14:39.412 "flush": true, 00:14:39.412 "reset": true, 00:14:39.412 "nvme_admin": false, 00:14:39.412 "nvme_io": false, 00:14:39.412 "nvme_io_md": false, 00:14:39.412 "write_zeroes": true, 00:14:39.412 "zcopy": true, 00:14:39.412 "get_zone_info": false, 00:14:39.412 "zone_management": false, 00:14:39.412 "zone_append": false, 00:14:39.412 "compare": false, 00:14:39.412 "compare_and_write": false, 00:14:39.412 "abort": true, 00:14:39.412 "seek_hole": false, 00:14:39.412 "seek_data": false, 00:14:39.412 "copy": true, 00:14:39.412 "nvme_iov_md": false 00:14:39.412 }, 00:14:39.412 "memory_domains": [ 00:14:39.412 { 00:14:39.412 "dma_device_id": "system", 00:14:39.412 "dma_device_type": 1 00:14:39.412 }, 00:14:39.412 { 00:14:39.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.412 "dma_device_type": 2 00:14:39.412 } 00:14:39.412 ], 00:14:39.412 "driver_specific": { 00:14:39.412 "passthru": { 00:14:39.412 "name": "pt2", 00:14:39.412 "base_bdev_name": "malloc2" 00:14:39.412 } 00:14:39.413 } 00:14:39.413 }' 00:14:39.413 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:39.413 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:39.413 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:39.413 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.672 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.672 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:39.672 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.672 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.672 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:14:39.672 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.672 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.672 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:39.672 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:39.672 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:14:39.932 [2024-07-24 16:31:36.699930] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.932 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' cc6b8e79-4964-41ed-a23a-29d2a94c36d5 '!=' cc6b8e79-4964-41ed-a23a-29d2a94c36d5 ']' 00:14:39.932 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:14:39.932 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:39.932 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:39.932 16:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 1599604 00:14:39.932 16:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 1599604 ']' 00:14:39.932 16:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 1599604 00:14:39.932 16:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:39.932 16:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.932 16:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1599604 00:14:39.932 16:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.932 16:31:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.932 16:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1599604' 00:14:39.932 killing process with pid 1599604 00:14:39.932 16:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 1599604 00:14:39.932 [2024-07-24 16:31:36.778749] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.932 [2024-07-24 16:31:36.778846] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.932 16:31:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 1599604 00:14:39.932 [2024-07-24 16:31:36.778907] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.932 [2024-07-24 16:31:36.778926] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:14:40.191 [2024-07-24 16:31:36.984199] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:42.098 16:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:14:42.098 00:14:42.098 real 0m11.932s 00:14:42.098 user 0m19.456s 00:14:42.098 sys 0m2.056s 00:14:42.098 16:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:42.098 16:31:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.098 ************************************ 00:14:42.098 END TEST raid_superblock_test 00:14:42.098 ************************************ 00:14:42.098 16:31:38 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:14:42.098 16:31:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:42.098 16:31:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:42.098 16:31:38 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:14:42.098 ************************************ 00:14:42.098 START TEST raid_read_error_test 00:14:42.098 ************************************ 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:42.098 16:31:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.v1NMV2EYpg 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1601933 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1601933 /var/tmp/spdk-raid.sock 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 1601933 ']' 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:42.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.098 16:31:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.098 [2024-07-24 16:31:38.950576] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:14:42.098 [2024-07-24 16:31:38.950698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1601933 ] 00:14:42.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.357 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:42.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.357 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:42.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.357 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:42.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.357 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:42.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.357 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:42.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.357 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:42.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.357 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:42.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.357 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:42.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.357 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:42.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.357 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:14:42.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.357 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:01.7 
cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:42.358 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:42.358 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:42.358 [2024-07-24 16:31:39.177401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.617 [2024-07-24 16:31:39.440399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.185 [2024-07-24 16:31:39.783917] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.185 [2024-07-24 16:31:39.783953] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.185 16:31:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.185 16:31:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:43.185 16:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:43.185 16:31:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:43.444 BaseBdev1_malloc 00:14:43.444 16:31:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:43.703 true 00:14:43.703 16:31:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:43.962 [2024-07-24 16:31:40.713431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:43.962 [2024-07-24 16:31:40.713493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.962 [2024-07-24 16:31:40.713519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:14:43.962 [2024-07-24 16:31:40.713542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.962 [2024-07-24 16:31:40.716312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.962 [2024-07-24 16:31:40.716351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:43.962 BaseBdev1 00:14:43.962 16:31:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:43.962 16:31:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.222 BaseBdev2_malloc 00:14:44.222 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:44.481 true 00:14:44.481 16:31:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:44.741 [2024-07-24 16:31:41.446444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:44.741 [2024-07-24 16:31:41.446503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.741 [2024-07-24 16:31:41.446531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:14:44.741 [2024-07-24 16:31:41.446552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.741 [2024-07-24 16:31:41.449302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.741 [2024-07-24 16:31:41.449341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.741 BaseBdev2 00:14:44.741 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:45.000 [2024-07-24 16:31:41.667108] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.000 [2024-07-24 16:31:41.669494] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.000 [2024-07-24 16:31:41.669737] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:14:45.000 [2024-07-24 16:31:41.669762] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:45.000 [2024-07-24 16:31:41.670100] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:14:45.000 [2024-07-24 16:31:41.670351] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:14:45.000 [2024-07-24 
16:31:41.670369] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:14:45.000 [2024-07-24 16:31:41.670578] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.000 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:45.000 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:45.001 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:45.001 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:45.001 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:45.001 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:45.001 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.001 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.001 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.001 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.001 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.001 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.259 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:45.259 "name": "raid_bdev1", 00:14:45.259 "uuid": "e60f595a-6477-4c3c-be35-b7fef996b6e9", 00:14:45.259 "strip_size_kb": 64, 00:14:45.259 "state": "online", 00:14:45.259 "raid_level": "raid0", 00:14:45.259 
"superblock": true, 00:14:45.259 "num_base_bdevs": 2, 00:14:45.259 "num_base_bdevs_discovered": 2, 00:14:45.259 "num_base_bdevs_operational": 2, 00:14:45.259 "base_bdevs_list": [ 00:14:45.259 { 00:14:45.259 "name": "BaseBdev1", 00:14:45.259 "uuid": "ca555248-512d-52cd-807a-31e65b8d88f1", 00:14:45.259 "is_configured": true, 00:14:45.259 "data_offset": 2048, 00:14:45.259 "data_size": 63488 00:14:45.259 }, 00:14:45.259 { 00:14:45.259 "name": "BaseBdev2", 00:14:45.259 "uuid": "694b667b-d369-55a3-a5af-ff7aa9cfb281", 00:14:45.259 "is_configured": true, 00:14:45.259 "data_offset": 2048, 00:14:45.259 "data_size": 63488 00:14:45.259 } 00:14:45.259 ] 00:14:45.259 }' 00:14:45.259 16:31:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:45.259 16:31:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.828 16:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:14:45.828 16:31:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:45.828 [2024-07-24 16:31:42.591301] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:14:46.765 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:47.023 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:14:47.023 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:47.023 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:14:47.023 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:47.023 16:31:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:47.023 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:47.023 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:47.023 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:47.023 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:47.023 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.023 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.023 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.023 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.023 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.023 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.282 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:47.282 "name": "raid_bdev1", 00:14:47.282 "uuid": "e60f595a-6477-4c3c-be35-b7fef996b6e9", 00:14:47.282 "strip_size_kb": 64, 00:14:47.282 "state": "online", 00:14:47.282 "raid_level": "raid0", 00:14:47.282 "superblock": true, 00:14:47.282 "num_base_bdevs": 2, 00:14:47.282 "num_base_bdevs_discovered": 2, 00:14:47.282 "num_base_bdevs_operational": 2, 00:14:47.282 "base_bdevs_list": [ 00:14:47.282 { 00:14:47.282 "name": "BaseBdev1", 00:14:47.282 "uuid": "ca555248-512d-52cd-807a-31e65b8d88f1", 00:14:47.282 "is_configured": true, 00:14:47.282 "data_offset": 2048, 00:14:47.282 "data_size": 63488 00:14:47.282 }, 
00:14:47.282 { 00:14:47.283 "name": "BaseBdev2", 00:14:47.283 "uuid": "694b667b-d369-55a3-a5af-ff7aa9cfb281", 00:14:47.283 "is_configured": true, 00:14:47.283 "data_offset": 2048, 00:14:47.283 "data_size": 63488 00:14:47.283 } 00:14:47.283 ] 00:14:47.283 }' 00:14:47.283 16:31:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:47.283 16:31:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.851 16:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:48.111 [2024-07-24 16:31:44.733981] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:48.111 [2024-07-24 16:31:44.734023] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.111 [2024-07-24 16:31:44.737306] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.111 [2024-07-24 16:31:44.737354] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.111 [2024-07-24 16:31:44.737392] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.111 [2024-07-24 16:31:44.737416] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:14:48.111 0 00:14:48.111 16:31:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1601933 00:14:48.111 16:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 1601933 ']' 00:14:48.111 16:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 1601933 00:14:48.111 16:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:14:48.111 16:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.111 
16:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1601933 00:14:48.111 16:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:48.111 16:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:48.111 16:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1601933' 00:14:48.111 killing process with pid 1601933 00:14:48.111 16:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 1601933 00:14:48.111 [2024-07-24 16:31:44.812186] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.111 16:31:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 1601933 00:14:48.111 [2024-07-24 16:31:44.906484] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.019 16:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.v1NMV2EYpg 00:14:50.019 16:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:14:50.019 16:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:14:50.019 16:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:14:50.019 16:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:14:50.019 16:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:50.019 16:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:50.019 16:31:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:14:50.019 00:14:50.019 real 0m7.840s 00:14:50.019 user 0m10.922s 00:14:50.019 sys 0m1.232s 00:14:50.019 16:31:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:50.019 16:31:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.019 ************************************ 00:14:50.019 END TEST raid_read_error_test 00:14:50.019 ************************************ 00:14:50.019 16:31:46 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:14:50.019 16:31:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:50.019 16:31:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:50.019 16:31:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.019 ************************************ 00:14:50.019 START TEST raid_write_error_test 00:14:50.019 ************************************ 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs 
)) 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.ZD8bJuJHEc 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1603351 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1603351 /var/tmp/spdk-raid.sock 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 1603351 ']' 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:50.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.019 16:31:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.279 [2024-07-24 16:31:46.884197] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:14:50.279 [2024-07-24 16:31:46.884322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603351 ] 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:14:50.279 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: 
Requested device 0000:3f:01.4 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:50.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:50.279 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:50.279 [2024-07-24 16:31:47.110715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.539 [2024-07-24 16:31:47.369379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.107 [2024-07-24 16:31:47.681602] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.107 [2024-07-24 16:31:47.681638] 
bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.107 16:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:51.107 16:31:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:51.107 16:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:51.107 16:31:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:51.400 BaseBdev1_malloc 00:14:51.400 16:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:51.691 true 00:14:51.691 16:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:51.950 [2024-07-24 16:31:48.554631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:51.950 [2024-07-24 16:31:48.554693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.950 [2024-07-24 16:31:48.554719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:14:51.950 [2024-07-24 16:31:48.554741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.950 [2024-07-24 16:31:48.557496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.950 [2024-07-24 16:31:48.557534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:51.950 BaseBdev1 00:14:51.950 16:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:51.950 
16:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:52.209 BaseBdev2_malloc 00:14:52.209 16:31:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:52.209 true 00:14:52.469 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:52.469 [2024-07-24 16:31:49.286878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:52.469 [2024-07-24 16:31:49.286933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.469 [2024-07-24 16:31:49.286956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:14:52.469 [2024-07-24 16:31:49.286977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.469 [2024-07-24 16:31:49.289692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.469 [2024-07-24 16:31:49.289727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:52.469 BaseBdev2 00:14:52.469 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:52.728 [2024-07-24 16:31:49.515578] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.728 [2024-07-24 16:31:49.517898] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.728 [2024-07-24 16:31:49.518165] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:14:52.728 [2024-07-24 16:31:49.518188] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:52.728 [2024-07-24 16:31:49.518529] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:14:52.728 [2024-07-24 16:31:49.518775] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:14:52.728 [2024-07-24 16:31:49.518793] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:14:52.728 [2024-07-24 16:31:49.519015] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.728 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:52.728 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:52.728 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:52.728 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:52.728 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:52.728 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:52.728 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:52.728 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:52.728 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:52.728 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:52.728 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.728 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.988 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.988 "name": "raid_bdev1", 00:14:52.988 "uuid": "228d540c-5a30-4f5c-9cb0-67e2b9188f51", 00:14:52.988 "strip_size_kb": 64, 00:14:52.988 "state": "online", 00:14:52.988 "raid_level": "raid0", 00:14:52.988 "superblock": true, 00:14:52.988 "num_base_bdevs": 2, 00:14:52.988 "num_base_bdevs_discovered": 2, 00:14:52.988 "num_base_bdevs_operational": 2, 00:14:52.988 "base_bdevs_list": [ 00:14:52.988 { 00:14:52.988 "name": "BaseBdev1", 00:14:52.988 "uuid": "2cdec0da-0a00-5b7a-973a-ca8eb705767f", 00:14:52.988 "is_configured": true, 00:14:52.988 "data_offset": 2048, 00:14:52.988 "data_size": 63488 00:14:52.988 }, 00:14:52.988 { 00:14:52.988 "name": "BaseBdev2", 00:14:52.988 "uuid": "2fa53604-7f67-5cd4-86fa-d64a2815ae5b", 00:14:52.988 "is_configured": true, 00:14:52.988 "data_offset": 2048, 00:14:52.988 "data_size": 63488 00:14:52.988 } 00:14:52.988 ] 00:14:52.988 }' 00:14:52.988 16:31:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.988 16:31:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.556 16:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:14:53.556 16:31:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:53.815 [2024-07-24 16:31:50.432028] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write 
failure 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.754 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.013 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.013 "name": "raid_bdev1", 00:14:55.013 "uuid": "228d540c-5a30-4f5c-9cb0-67e2b9188f51", 00:14:55.013 
"strip_size_kb": 64, 00:14:55.013 "state": "online", 00:14:55.013 "raid_level": "raid0", 00:14:55.013 "superblock": true, 00:14:55.013 "num_base_bdevs": 2, 00:14:55.013 "num_base_bdevs_discovered": 2, 00:14:55.013 "num_base_bdevs_operational": 2, 00:14:55.013 "base_bdevs_list": [ 00:14:55.013 { 00:14:55.013 "name": "BaseBdev1", 00:14:55.013 "uuid": "2cdec0da-0a00-5b7a-973a-ca8eb705767f", 00:14:55.013 "is_configured": true, 00:14:55.013 "data_offset": 2048, 00:14:55.013 "data_size": 63488 00:14:55.013 }, 00:14:55.013 { 00:14:55.013 "name": "BaseBdev2", 00:14:55.013 "uuid": "2fa53604-7f67-5cd4-86fa-d64a2815ae5b", 00:14:55.013 "is_configured": true, 00:14:55.013 "data_offset": 2048, 00:14:55.013 "data_size": 63488 00:14:55.013 } 00:14:55.013 ] 00:14:55.013 }' 00:14:55.014 16:31:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.014 16:31:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.582 16:31:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:55.841 [2024-07-24 16:31:52.564044] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.841 [2024-07-24 16:31:52.564090] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.841 [2024-07-24 16:31:52.567412] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.841 [2024-07-24 16:31:52.567466] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.841 [2024-07-24 16:31:52.567505] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.841 [2024-07-24 16:31:52.567529] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:14:55.841 0 00:14:55.841 16:31:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1603351 00:14:55.841 16:31:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 1603351 ']' 00:14:55.841 16:31:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 1603351 00:14:55.841 16:31:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:14:55.841 16:31:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:55.841 16:31:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1603351 00:14:55.841 16:31:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:55.841 16:31:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:55.841 16:31:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1603351' 00:14:55.841 killing process with pid 1603351 00:14:55.841 16:31:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 1603351 00:14:55.841 [2024-07-24 16:31:52.644783] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.841 16:31:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 1603351 00:14:56.099 [2024-07-24 16:31:52.751429] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.002 16:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.ZD8bJuJHEc 00:14:58.002 16:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:14:58.002 16:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:14:58.002 16:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:14:58.002 16:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 
00:14:58.002 16:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:58.002 16:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:58.002 16:31:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:14:58.002 00:14:58.002 real 0m7.843s 00:14:58.002 user 0m10.943s 00:14:58.002 sys 0m1.193s 00:14:58.002 16:31:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.002 16:31:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.002 ************************************ 00:14:58.002 END TEST raid_write_error_test 00:14:58.002 ************************************ 00:14:58.002 16:31:54 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:14:58.002 16:31:54 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:58.002 16:31:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:58.002 16:31:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:58.002 16:31:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.002 ************************************ 00:14:58.002 START TEST raid_state_function_test 00:14:58.002 ************************************ 00:14:58.002 16:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:14:58.002 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:58.002 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:58.002 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:58.002 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:58.002 16:31:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # 
superblock_create_arg= 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1604770 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1604770' 00:14:58.003 Process raid pid: 1604770 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1604770 /var/tmp/spdk-raid.sock 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 1604770 ']' 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:58.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.003 16:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:58.003 [2024-07-24 16:31:54.783929] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:14:58.003 [2024-07-24 16:31:54.784046] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.262 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:58.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.262 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:58.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.262 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:58.262 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.262 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:58.263 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:58.263 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:58.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:58.263 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:58.263 [2024-07-24 16:31:55.011296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.522 [2024-07-24 16:31:55.305075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.091 [2024-07-24 16:31:55.659030] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.091 [2024-07-24 16:31:55.659067] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.091 16:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:59.091 16:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:59.091 16:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:59.350 [2024-07-24 16:31:55.978723] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.350 [2024-07-24 16:31:55.978779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:14:59.350 [2024-07-24 16:31:55.978794] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.350 [2024-07-24 16:31:55.978810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.350 16:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:59.350 16:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:59.350 16:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:59.350 16:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:59.350 16:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:59.350 16:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:59.350 16:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.350 16:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.350 16:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.350 16:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.350 16:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.350 16:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.608 16:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.608 "name": "Existed_Raid", 00:14:59.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.608 "strip_size_kb": 64, 
00:14:59.608 "state": "configuring", 00:14:59.608 "raid_level": "concat", 00:14:59.608 "superblock": false, 00:14:59.608 "num_base_bdevs": 2, 00:14:59.608 "num_base_bdevs_discovered": 0, 00:14:59.608 "num_base_bdevs_operational": 2, 00:14:59.608 "base_bdevs_list": [ 00:14:59.608 { 00:14:59.608 "name": "BaseBdev1", 00:14:59.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.608 "is_configured": false, 00:14:59.608 "data_offset": 0, 00:14:59.608 "data_size": 0 00:14:59.608 }, 00:14:59.608 { 00:14:59.608 "name": "BaseBdev2", 00:14:59.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.608 "is_configured": false, 00:14:59.608 "data_offset": 0, 00:14:59.608 "data_size": 0 00:14:59.608 } 00:14:59.608 ] 00:14:59.608 }' 00:14:59.608 16:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.608 16:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.175 16:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:00.176 [2024-07-24 16:31:56.973252] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.176 [2024-07-24 16:31:56.973292] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:15:00.176 16:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:00.435 [2024-07-24 16:31:57.189881] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.435 [2024-07-24 16:31:57.189927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.435 [2024-07-24 16:31:57.189941] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.435 [2024-07-24 16:31:57.189958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.435 16:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:01.003 [2024-07-24 16:31:57.741977] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.003 BaseBdev1 00:15:01.003 16:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:01.003 16:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:01.003 16:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:01.003 16:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:01.003 16:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:01.003 16:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:01.003 16:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:01.262 16:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:01.867 [ 00:15:01.867 { 00:15:01.867 "name": "BaseBdev1", 00:15:01.867 "aliases": [ 00:15:01.867 "7640cdc7-d37e-49c1-9fa0-8abae2ebd9ca" 00:15:01.867 ], 00:15:01.867 "product_name": "Malloc disk", 00:15:01.867 "block_size": 512, 00:15:01.867 "num_blocks": 65536, 00:15:01.867 "uuid": 
"7640cdc7-d37e-49c1-9fa0-8abae2ebd9ca", 00:15:01.867 "assigned_rate_limits": { 00:15:01.867 "rw_ios_per_sec": 0, 00:15:01.867 "rw_mbytes_per_sec": 0, 00:15:01.867 "r_mbytes_per_sec": 0, 00:15:01.867 "w_mbytes_per_sec": 0 00:15:01.867 }, 00:15:01.867 "claimed": true, 00:15:01.867 "claim_type": "exclusive_write", 00:15:01.867 "zoned": false, 00:15:01.867 "supported_io_types": { 00:15:01.867 "read": true, 00:15:01.867 "write": true, 00:15:01.867 "unmap": true, 00:15:01.867 "flush": true, 00:15:01.867 "reset": true, 00:15:01.867 "nvme_admin": false, 00:15:01.867 "nvme_io": false, 00:15:01.867 "nvme_io_md": false, 00:15:01.867 "write_zeroes": true, 00:15:01.867 "zcopy": true, 00:15:01.867 "get_zone_info": false, 00:15:01.867 "zone_management": false, 00:15:01.867 "zone_append": false, 00:15:01.867 "compare": false, 00:15:01.867 "compare_and_write": false, 00:15:01.867 "abort": true, 00:15:01.867 "seek_hole": false, 00:15:01.867 "seek_data": false, 00:15:01.867 "copy": true, 00:15:01.867 "nvme_iov_md": false 00:15:01.867 }, 00:15:01.867 "memory_domains": [ 00:15:01.867 { 00:15:01.867 "dma_device_id": "system", 00:15:01.867 "dma_device_type": 1 00:15:01.867 }, 00:15:01.867 { 00:15:01.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.867 "dma_device_type": 2 00:15:01.867 } 00:15:01.867 ], 00:15:01.867 "driver_specific": {} 00:15:01.867 } 00:15:01.867 ] 00:15:01.867 16:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:01.867 16:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:01.867 16:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:01.867 16:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:01.867 16:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:01.867 16:31:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.867 16:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:01.867 16:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.867 16:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.867 16:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.867 16:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.867 16:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.867 16:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.867 16:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.867 "name": "Existed_Raid", 00:15:01.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.867 "strip_size_kb": 64, 00:15:01.867 "state": "configuring", 00:15:01.867 "raid_level": "concat", 00:15:01.867 "superblock": false, 00:15:01.867 "num_base_bdevs": 2, 00:15:01.867 "num_base_bdevs_discovered": 1, 00:15:01.867 "num_base_bdevs_operational": 2, 00:15:01.867 "base_bdevs_list": [ 00:15:01.867 { 00:15:01.867 "name": "BaseBdev1", 00:15:01.867 "uuid": "7640cdc7-d37e-49c1-9fa0-8abae2ebd9ca", 00:15:01.867 "is_configured": true, 00:15:01.867 "data_offset": 0, 00:15:01.867 "data_size": 65536 00:15:01.867 }, 00:15:01.867 { 00:15:01.867 "name": "BaseBdev2", 00:15:01.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.867 "is_configured": false, 00:15:01.867 "data_offset": 0, 00:15:01.867 "data_size": 0 00:15:01.867 } 00:15:01.867 ] 00:15:01.867 }' 00:15:01.867 16:31:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.867 16:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.436 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:02.695 [2024-07-24 16:31:59.494783] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:02.695 [2024-07-24 16:31:59.494837] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:15:02.695 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:02.954 [2024-07-24 16:31:59.667329] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.954 [2024-07-24 16:31:59.669616] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:02.954 [2024-07-24 16:31:59.669657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:02.954 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:02.954 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:02.954 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:02.954 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:02.954 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:02.954 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:15:02.954 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:02.954 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:02.954 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:02.954 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:02.954 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:02.954 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:02.954 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.954 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.213 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.213 "name": "Existed_Raid", 00:15:03.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.213 "strip_size_kb": 64, 00:15:03.213 "state": "configuring", 00:15:03.213 "raid_level": "concat", 00:15:03.213 "superblock": false, 00:15:03.213 "num_base_bdevs": 2, 00:15:03.213 "num_base_bdevs_discovered": 1, 00:15:03.213 "num_base_bdevs_operational": 2, 00:15:03.213 "base_bdevs_list": [ 00:15:03.213 { 00:15:03.213 "name": "BaseBdev1", 00:15:03.213 "uuid": "7640cdc7-d37e-49c1-9fa0-8abae2ebd9ca", 00:15:03.213 "is_configured": true, 00:15:03.213 "data_offset": 0, 00:15:03.213 "data_size": 65536 00:15:03.213 }, 00:15:03.213 { 00:15:03.213 "name": "BaseBdev2", 00:15:03.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.213 "is_configured": false, 00:15:03.213 "data_offset": 0, 00:15:03.213 "data_size": 0 00:15:03.213 } 00:15:03.213 ] 00:15:03.213 }' 
00:15:03.213 16:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.213 16:31:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.779 16:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:04.037 [2024-07-24 16:32:00.751077] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.037 [2024-07-24 16:32:00.751125] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:15:04.037 [2024-07-24 16:32:00.751147] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:04.037 [2024-07-24 16:32:00.751481] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:04.037 [2024-07-24 16:32:00.751700] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:15:04.037 [2024-07-24 16:32:00.751718] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:15:04.037 [2024-07-24 16:32:00.752010] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.037 BaseBdev2 00:15:04.037 16:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:04.037 16:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:04.037 16:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:04.037 16:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:04.037 16:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:04.037 16:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:15:04.037 16:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:04.295 16:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:04.554 [ 00:15:04.554 { 00:15:04.554 "name": "BaseBdev2", 00:15:04.554 "aliases": [ 00:15:04.554 "25b95ec4-b7b2-405d-aa2c-429f8e4ce02b" 00:15:04.554 ], 00:15:04.554 "product_name": "Malloc disk", 00:15:04.554 "block_size": 512, 00:15:04.554 "num_blocks": 65536, 00:15:04.554 "uuid": "25b95ec4-b7b2-405d-aa2c-429f8e4ce02b", 00:15:04.554 "assigned_rate_limits": { 00:15:04.554 "rw_ios_per_sec": 0, 00:15:04.554 "rw_mbytes_per_sec": 0, 00:15:04.554 "r_mbytes_per_sec": 0, 00:15:04.554 "w_mbytes_per_sec": 0 00:15:04.554 }, 00:15:04.554 "claimed": true, 00:15:04.554 "claim_type": "exclusive_write", 00:15:04.554 "zoned": false, 00:15:04.554 "supported_io_types": { 00:15:04.554 "read": true, 00:15:04.554 "write": true, 00:15:04.554 "unmap": true, 00:15:04.554 "flush": true, 00:15:04.554 "reset": true, 00:15:04.554 "nvme_admin": false, 00:15:04.554 "nvme_io": false, 00:15:04.554 "nvme_io_md": false, 00:15:04.554 "write_zeroes": true, 00:15:04.554 "zcopy": true, 00:15:04.554 "get_zone_info": false, 00:15:04.554 "zone_management": false, 00:15:04.554 "zone_append": false, 00:15:04.554 "compare": false, 00:15:04.554 "compare_and_write": false, 00:15:04.554 "abort": true, 00:15:04.554 "seek_hole": false, 00:15:04.554 "seek_data": false, 00:15:04.554 "copy": true, 00:15:04.554 "nvme_iov_md": false 00:15:04.554 }, 00:15:04.554 "memory_domains": [ 00:15:04.554 { 00:15:04.554 "dma_device_id": "system", 00:15:04.554 "dma_device_type": 1 00:15:04.554 }, 00:15:04.554 { 00:15:04.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.554 
"dma_device_type": 2 00:15:04.554 } 00:15:04.554 ], 00:15:04.554 "driver_specific": {} 00:15:04.554 } 00:15:04.554 ] 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.554 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.813 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- 
# raid_bdev_info='{ 00:15:04.813 "name": "Existed_Raid", 00:15:04.813 "uuid": "3cec3db5-d269-42b9-82c0-721476a9fe46", 00:15:04.813 "strip_size_kb": 64, 00:15:04.813 "state": "online", 00:15:04.813 "raid_level": "concat", 00:15:04.813 "superblock": false, 00:15:04.813 "num_base_bdevs": 2, 00:15:04.813 "num_base_bdevs_discovered": 2, 00:15:04.813 "num_base_bdevs_operational": 2, 00:15:04.813 "base_bdevs_list": [ 00:15:04.813 { 00:15:04.813 "name": "BaseBdev1", 00:15:04.813 "uuid": "7640cdc7-d37e-49c1-9fa0-8abae2ebd9ca", 00:15:04.813 "is_configured": true, 00:15:04.813 "data_offset": 0, 00:15:04.813 "data_size": 65536 00:15:04.813 }, 00:15:04.813 { 00:15:04.813 "name": "BaseBdev2", 00:15:04.814 "uuid": "25b95ec4-b7b2-405d-aa2c-429f8e4ce02b", 00:15:04.814 "is_configured": true, 00:15:04.814 "data_offset": 0, 00:15:04.814 "data_size": 65536 00:15:04.814 } 00:15:04.814 ] 00:15:04.814 }' 00:15:04.814 16:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:04.814 16:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.389 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:05.389 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:05.389 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:05.389 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:05.389 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:05.389 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:05.389 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:05.389 16:32:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:05.389 [2024-07-24 16:32:02.219443] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.389 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:05.389 "name": "Existed_Raid", 00:15:05.389 "aliases": [ 00:15:05.389 "3cec3db5-d269-42b9-82c0-721476a9fe46" 00:15:05.389 ], 00:15:05.389 "product_name": "Raid Volume", 00:15:05.389 "block_size": 512, 00:15:05.389 "num_blocks": 131072, 00:15:05.389 "uuid": "3cec3db5-d269-42b9-82c0-721476a9fe46", 00:15:05.389 "assigned_rate_limits": { 00:15:05.389 "rw_ios_per_sec": 0, 00:15:05.389 "rw_mbytes_per_sec": 0, 00:15:05.389 "r_mbytes_per_sec": 0, 00:15:05.389 "w_mbytes_per_sec": 0 00:15:05.389 }, 00:15:05.389 "claimed": false, 00:15:05.389 "zoned": false, 00:15:05.389 "supported_io_types": { 00:15:05.389 "read": true, 00:15:05.389 "write": true, 00:15:05.389 "unmap": true, 00:15:05.389 "flush": true, 00:15:05.389 "reset": true, 00:15:05.389 "nvme_admin": false, 00:15:05.389 "nvme_io": false, 00:15:05.389 "nvme_io_md": false, 00:15:05.389 "write_zeroes": true, 00:15:05.389 "zcopy": false, 00:15:05.389 "get_zone_info": false, 00:15:05.389 "zone_management": false, 00:15:05.389 "zone_append": false, 00:15:05.389 "compare": false, 00:15:05.389 "compare_and_write": false, 00:15:05.389 "abort": false, 00:15:05.389 "seek_hole": false, 00:15:05.389 "seek_data": false, 00:15:05.389 "copy": false, 00:15:05.389 "nvme_iov_md": false 00:15:05.389 }, 00:15:05.389 "memory_domains": [ 00:15:05.389 { 00:15:05.389 "dma_device_id": "system", 00:15:05.389 "dma_device_type": 1 00:15:05.389 }, 00:15:05.389 { 00:15:05.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.389 "dma_device_type": 2 00:15:05.389 }, 00:15:05.389 { 00:15:05.389 "dma_device_id": "system", 00:15:05.389 "dma_device_type": 1 00:15:05.389 }, 00:15:05.389 { 00:15:05.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:05.389 "dma_device_type": 2 00:15:05.389 } 00:15:05.389 ], 00:15:05.389 "driver_specific": { 00:15:05.389 "raid": { 00:15:05.389 "uuid": "3cec3db5-d269-42b9-82c0-721476a9fe46", 00:15:05.389 "strip_size_kb": 64, 00:15:05.389 "state": "online", 00:15:05.389 "raid_level": "concat", 00:15:05.389 "superblock": false, 00:15:05.389 "num_base_bdevs": 2, 00:15:05.389 "num_base_bdevs_discovered": 2, 00:15:05.389 "num_base_bdevs_operational": 2, 00:15:05.389 "base_bdevs_list": [ 00:15:05.389 { 00:15:05.389 "name": "BaseBdev1", 00:15:05.389 "uuid": "7640cdc7-d37e-49c1-9fa0-8abae2ebd9ca", 00:15:05.389 "is_configured": true, 00:15:05.389 "data_offset": 0, 00:15:05.389 "data_size": 65536 00:15:05.389 }, 00:15:05.389 { 00:15:05.389 "name": "BaseBdev2", 00:15:05.389 "uuid": "25b95ec4-b7b2-405d-aa2c-429f8e4ce02b", 00:15:05.389 "is_configured": true, 00:15:05.389 "data_offset": 0, 00:15:05.389 "data_size": 65536 00:15:05.389 } 00:15:05.389 ] 00:15:05.389 } 00:15:05.389 } 00:15:05.389 }' 00:15:05.389 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:05.648 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:05.648 BaseBdev2' 00:15:05.648 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:05.648 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:05.648 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:05.648 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:05.648 "name": "BaseBdev1", 00:15:05.648 "aliases": [ 00:15:05.648 "7640cdc7-d37e-49c1-9fa0-8abae2ebd9ca" 00:15:05.648 ], 00:15:05.648 "product_name": "Malloc disk", 
00:15:05.648 "block_size": 512, 00:15:05.648 "num_blocks": 65536, 00:15:05.648 "uuid": "7640cdc7-d37e-49c1-9fa0-8abae2ebd9ca", 00:15:05.648 "assigned_rate_limits": { 00:15:05.648 "rw_ios_per_sec": 0, 00:15:05.648 "rw_mbytes_per_sec": 0, 00:15:05.648 "r_mbytes_per_sec": 0, 00:15:05.648 "w_mbytes_per_sec": 0 00:15:05.648 }, 00:15:05.648 "claimed": true, 00:15:05.648 "claim_type": "exclusive_write", 00:15:05.648 "zoned": false, 00:15:05.648 "supported_io_types": { 00:15:05.648 "read": true, 00:15:05.648 "write": true, 00:15:05.648 "unmap": true, 00:15:05.648 "flush": true, 00:15:05.648 "reset": true, 00:15:05.648 "nvme_admin": false, 00:15:05.648 "nvme_io": false, 00:15:05.648 "nvme_io_md": false, 00:15:05.648 "write_zeroes": true, 00:15:05.648 "zcopy": true, 00:15:05.648 "get_zone_info": false, 00:15:05.648 "zone_management": false, 00:15:05.648 "zone_append": false, 00:15:05.648 "compare": false, 00:15:05.648 "compare_and_write": false, 00:15:05.648 "abort": true, 00:15:05.648 "seek_hole": false, 00:15:05.648 "seek_data": false, 00:15:05.648 "copy": true, 00:15:05.648 "nvme_iov_md": false 00:15:05.648 }, 00:15:05.648 "memory_domains": [ 00:15:05.648 { 00:15:05.648 "dma_device_id": "system", 00:15:05.648 "dma_device_type": 1 00:15:05.648 }, 00:15:05.648 { 00:15:05.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.648 "dma_device_type": 2 00:15:05.648 } 00:15:05.648 ], 00:15:05.648 "driver_specific": {} 00:15:05.648 }' 00:15:05.648 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.907 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.907 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:05.907 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.907 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.907 16:32:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:05.907 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:05.907 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:05.907 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:06.165 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.165 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.165 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:06.165 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:06.165 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:06.165 16:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:06.424 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:06.424 "name": "BaseBdev2", 00:15:06.424 "aliases": [ 00:15:06.424 "25b95ec4-b7b2-405d-aa2c-429f8e4ce02b" 00:15:06.424 ], 00:15:06.424 "product_name": "Malloc disk", 00:15:06.424 "block_size": 512, 00:15:06.424 "num_blocks": 65536, 00:15:06.424 "uuid": "25b95ec4-b7b2-405d-aa2c-429f8e4ce02b", 00:15:06.424 "assigned_rate_limits": { 00:15:06.424 "rw_ios_per_sec": 0, 00:15:06.424 "rw_mbytes_per_sec": 0, 00:15:06.424 "r_mbytes_per_sec": 0, 00:15:06.424 "w_mbytes_per_sec": 0 00:15:06.424 }, 00:15:06.424 "claimed": true, 00:15:06.424 "claim_type": "exclusive_write", 00:15:06.424 "zoned": false, 00:15:06.424 "supported_io_types": { 00:15:06.424 "read": true, 00:15:06.424 "write": true, 00:15:06.424 "unmap": true, 00:15:06.424 "flush": true, 00:15:06.424 "reset": 
true, 00:15:06.424 "nvme_admin": false, 00:15:06.424 "nvme_io": false, 00:15:06.424 "nvme_io_md": false, 00:15:06.424 "write_zeroes": true, 00:15:06.424 "zcopy": true, 00:15:06.424 "get_zone_info": false, 00:15:06.424 "zone_management": false, 00:15:06.424 "zone_append": false, 00:15:06.424 "compare": false, 00:15:06.424 "compare_and_write": false, 00:15:06.424 "abort": true, 00:15:06.424 "seek_hole": false, 00:15:06.424 "seek_data": false, 00:15:06.424 "copy": true, 00:15:06.424 "nvme_iov_md": false 00:15:06.424 }, 00:15:06.424 "memory_domains": [ 00:15:06.424 { 00:15:06.424 "dma_device_id": "system", 00:15:06.424 "dma_device_type": 1 00:15:06.424 }, 00:15:06.424 { 00:15:06.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.424 "dma_device_type": 2 00:15:06.424 } 00:15:06.424 ], 00:15:06.424 "driver_specific": {} 00:15:06.424 }' 00:15:06.424 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.424 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.424 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:06.424 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.424 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.424 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:06.424 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.683 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.683 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:06.683 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.683 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.683 16:32:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:06.683 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:06.942 [2024-07-24 16:32:03.651032] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:06.942 [2024-07-24 16:32:03.651069] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.942 [2024-07-24 16:32:03.651126] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.942 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.201 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.201 "name": "Existed_Raid", 00:15:07.201 "uuid": "3cec3db5-d269-42b9-82c0-721476a9fe46", 00:15:07.201 "strip_size_kb": 64, 00:15:07.201 "state": "offline", 00:15:07.201 "raid_level": "concat", 00:15:07.201 "superblock": false, 00:15:07.201 "num_base_bdevs": 2, 00:15:07.201 "num_base_bdevs_discovered": 1, 00:15:07.201 "num_base_bdevs_operational": 1, 00:15:07.201 "base_bdevs_list": [ 00:15:07.201 { 00:15:07.201 "name": null, 00:15:07.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.201 "is_configured": false, 00:15:07.201 "data_offset": 0, 00:15:07.201 "data_size": 65536 00:15:07.201 }, 00:15:07.201 { 00:15:07.201 "name": "BaseBdev2", 00:15:07.201 "uuid": "25b95ec4-b7b2-405d-aa2c-429f8e4ce02b", 00:15:07.201 "is_configured": true, 00:15:07.201 "data_offset": 0, 00:15:07.201 "data_size": 65536 00:15:07.201 } 00:15:07.201 ] 00:15:07.201 }' 00:15:07.201 16:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.201 16:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.771 16:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:07.771 16:32:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:07.771 16:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.771 16:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:08.030 16:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:08.030 16:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:08.030 16:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:08.288 [2024-07-24 16:32:04.960620] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:08.288 [2024-07-24 16:32:04.960679] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:15:08.288 16:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:08.288 16:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:08.288 16:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.288 16:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:08.547 16:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:08.547 16:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:08.547 16:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:08.547 16:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 
1604770 00:15:08.547 16:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 1604770 ']' 00:15:08.547 16:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 1604770 00:15:08.547 16:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:08.547 16:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:08.547 16:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1604770 00:15:08.807 16:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:08.807 16:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:08.807 16:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1604770' 00:15:08.807 killing process with pid 1604770 00:15:08.807 16:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 1604770 00:15:08.807 [2024-07-24 16:32:05.411300] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.807 16:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 1604770 00:15:08.807 [2024-07-24 16:32:05.435711] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:10.715 00:15:10.715 real 0m12.410s 00:15:10.715 user 0m20.298s 00:15:10.715 sys 0m2.129s 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.715 ************************************ 00:15:10.715 END TEST raid_state_function_test 00:15:10.715 ************************************ 00:15:10.715 16:32:07 
bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:10.715 16:32:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:10.715 16:32:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:10.715 16:32:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:10.715 ************************************ 00:15:10.715 START TEST raid_state_function_test_sb 00:15:10.715 ************************************ 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:10.715 16:32:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1607110 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1607110' 00:15:10.715 Process raid pid: 1607110 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1607110 /var/tmp/spdk-raid.sock 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 1607110 ']' 00:15:10.715 16:32:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:10.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:10.715 16:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.715 [2024-07-24 16:32:07.281184] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:15:10.715 [2024-07-24 16:32:07.281304] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:01.5 cannot be used 
00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:10.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.715 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:10.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:10.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:10.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:10.716 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:10.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:10.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:10.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:10.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:10.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:10.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:10.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:10.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:10.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:10.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:10.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:10.716 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:10.716 [2024-07-24 16:32:07.510894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.975 [2024-07-24 16:32:07.782497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.544 [2024-07-24 16:32:08.126400] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:15:11.544 [2024-07-24 16:32:08.126436] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.544 16:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:11.544 16:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:11.544 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:11.803 [2024-07-24 16:32:08.513822] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:11.803 [2024-07-24 16:32:08.513875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:11.803 [2024-07-24 16:32:08.513890] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.803 [2024-07-24 16:32:08.513907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.803 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:11.803 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:11.803 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:11.803 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:11.803 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:11.803 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:11.803 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:15:11.803 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.803 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.803 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.803 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.803 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.061 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.061 "name": "Existed_Raid", 00:15:12.061 "uuid": "231e020d-b62d-449e-b015-0176cdd6d555", 00:15:12.061 "strip_size_kb": 64, 00:15:12.061 "state": "configuring", 00:15:12.061 "raid_level": "concat", 00:15:12.061 "superblock": true, 00:15:12.061 "num_base_bdevs": 2, 00:15:12.061 "num_base_bdevs_discovered": 0, 00:15:12.061 "num_base_bdevs_operational": 2, 00:15:12.061 "base_bdevs_list": [ 00:15:12.061 { 00:15:12.061 "name": "BaseBdev1", 00:15:12.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.061 "is_configured": false, 00:15:12.061 "data_offset": 0, 00:15:12.061 "data_size": 0 00:15:12.061 }, 00:15:12.061 { 00:15:12.061 "name": "BaseBdev2", 00:15:12.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.061 "is_configured": false, 00:15:12.061 "data_offset": 0, 00:15:12.061 "data_size": 0 00:15:12.061 } 00:15:12.061 ] 00:15:12.061 }' 00:15:12.061 16:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.061 16:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.629 16:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:12.888 [2024-07-24 16:32:09.528379] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.888 [2024-07-24 16:32:09.528421] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:15:12.888 16:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:13.146 [2024-07-24 16:32:09.757038] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:13.146 [2024-07-24 16:32:09.757083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:13.146 [2024-07-24 16:32:09.757096] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.146 [2024-07-24 16:32:09.757112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.146 16:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:13.406 [2024-07-24 16:32:10.037482] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.406 BaseBdev1 00:15:13.406 16:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:13.406 16:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:13.406 16:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:13.406 16:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local i 00:15:13.406 16:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:13.406 16:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:13.406 16:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:13.406 16:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:13.665 [ 00:15:13.665 { 00:15:13.665 "name": "BaseBdev1", 00:15:13.665 "aliases": [ 00:15:13.665 "89d132d0-12b3-4170-a1a0-7f0a94c5a812" 00:15:13.665 ], 00:15:13.665 "product_name": "Malloc disk", 00:15:13.665 "block_size": 512, 00:15:13.665 "num_blocks": 65536, 00:15:13.665 "uuid": "89d132d0-12b3-4170-a1a0-7f0a94c5a812", 00:15:13.665 "assigned_rate_limits": { 00:15:13.665 "rw_ios_per_sec": 0, 00:15:13.665 "rw_mbytes_per_sec": 0, 00:15:13.665 "r_mbytes_per_sec": 0, 00:15:13.665 "w_mbytes_per_sec": 0 00:15:13.665 }, 00:15:13.665 "claimed": true, 00:15:13.665 "claim_type": "exclusive_write", 00:15:13.665 "zoned": false, 00:15:13.665 "supported_io_types": { 00:15:13.665 "read": true, 00:15:13.665 "write": true, 00:15:13.665 "unmap": true, 00:15:13.665 "flush": true, 00:15:13.665 "reset": true, 00:15:13.665 "nvme_admin": false, 00:15:13.665 "nvme_io": false, 00:15:13.665 "nvme_io_md": false, 00:15:13.665 "write_zeroes": true, 00:15:13.665 "zcopy": true, 00:15:13.665 "get_zone_info": false, 00:15:13.665 "zone_management": false, 00:15:13.665 "zone_append": false, 00:15:13.665 "compare": false, 00:15:13.665 "compare_and_write": false, 00:15:13.665 "abort": true, 00:15:13.665 "seek_hole": false, 00:15:13.665 "seek_data": false, 00:15:13.665 "copy": true, 00:15:13.665 "nvme_iov_md": false 00:15:13.665 }, 
00:15:13.665 "memory_domains": [ 00:15:13.665 { 00:15:13.665 "dma_device_id": "system", 00:15:13.665 "dma_device_type": 1 00:15:13.665 }, 00:15:13.665 { 00:15:13.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.665 "dma_device_type": 2 00:15:13.665 } 00:15:13.665 ], 00:15:13.665 "driver_specific": {} 00:15:13.665 } 00:15:13.665 ] 00:15:13.665 16:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:13.665 16:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:13.665 16:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:13.665 16:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:13.665 16:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:13.665 16:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:13.665 16:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:13.665 16:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:13.665 16:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:13.665 16:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:13.665 16:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:13.665 16:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.665 16:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.924 16:32:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:13.924 "name": "Existed_Raid", 00:15:13.924 "uuid": "97dc7e13-d544-4dea-bb22-70d7645d6366", 00:15:13.924 "strip_size_kb": 64, 00:15:13.924 "state": "configuring", 00:15:13.924 "raid_level": "concat", 00:15:13.924 "superblock": true, 00:15:13.924 "num_base_bdevs": 2, 00:15:13.924 "num_base_bdevs_discovered": 1, 00:15:13.924 "num_base_bdevs_operational": 2, 00:15:13.924 "base_bdevs_list": [ 00:15:13.924 { 00:15:13.924 "name": "BaseBdev1", 00:15:13.924 "uuid": "89d132d0-12b3-4170-a1a0-7f0a94c5a812", 00:15:13.924 "is_configured": true, 00:15:13.924 "data_offset": 2048, 00:15:13.924 "data_size": 63488 00:15:13.924 }, 00:15:13.924 { 00:15:13.924 "name": "BaseBdev2", 00:15:13.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.924 "is_configured": false, 00:15:13.924 "data_offset": 0, 00:15:13.924 "data_size": 0 00:15:13.924 } 00:15:13.924 ] 00:15:13.924 }' 00:15:13.924 16:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:13.924 16:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.492 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:14.751 [2024-07-24 16:32:11.433290] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.751 [2024-07-24 16:32:11.433345] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:15:14.751 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:15.009 [2024-07-24 16:32:11.662006] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.009 [2024-07-24 16:32:11.664329] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:15.009 [2024-07-24 16:32:11.664373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:15.009 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:15.009 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:15.009 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:15.009 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:15.009 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:15.009 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:15.009 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:15.009 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:15.009 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.009 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:15.009 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.009 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.009 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.009 16:32:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.268 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:15.268 "name": "Existed_Raid", 00:15:15.268 "uuid": "62fa42ff-26f1-4da9-97f5-5fdf610f79e9", 00:15:15.268 "strip_size_kb": 64, 00:15:15.268 "state": "configuring", 00:15:15.268 "raid_level": "concat", 00:15:15.268 "superblock": true, 00:15:15.268 "num_base_bdevs": 2, 00:15:15.268 "num_base_bdevs_discovered": 1, 00:15:15.268 "num_base_bdevs_operational": 2, 00:15:15.268 "base_bdevs_list": [ 00:15:15.268 { 00:15:15.268 "name": "BaseBdev1", 00:15:15.268 "uuid": "89d132d0-12b3-4170-a1a0-7f0a94c5a812", 00:15:15.268 "is_configured": true, 00:15:15.268 "data_offset": 2048, 00:15:15.268 "data_size": 63488 00:15:15.268 }, 00:15:15.268 { 00:15:15.268 "name": "BaseBdev2", 00:15:15.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.268 "is_configured": false, 00:15:15.268 "data_offset": 0, 00:15:15.268 "data_size": 0 00:15:15.268 } 00:15:15.268 ] 00:15:15.268 }' 00:15:15.268 16:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:15.268 16:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.835 16:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:16.095 [2024-07-24 16:32:12.738157] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.095 [2024-07-24 16:32:12.738413] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:15:16.095 [2024-07-24 16:32:12.738436] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:16.095 [2024-07-24 16:32:12.738758] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000010570 00:15:16.095 [2024-07-24 16:32:12.738969] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:15:16.095 [2024-07-24 16:32:12.738987] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:15:16.095 [2024-07-24 16:32:12.739174] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.095 BaseBdev2 00:15:16.095 16:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:16.095 16:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:16.095 16:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:16.095 16:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:16.095 16:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:16.095 16:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:16.095 16:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:16.355 16:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:16.355 [ 00:15:16.355 { 00:15:16.355 "name": "BaseBdev2", 00:15:16.355 "aliases": [ 00:15:16.355 "0cdcba77-1207-4bd5-abc4-44e2d931e711" 00:15:16.355 ], 00:15:16.355 "product_name": "Malloc disk", 00:15:16.355 "block_size": 512, 00:15:16.355 "num_blocks": 65536, 00:15:16.355 "uuid": "0cdcba77-1207-4bd5-abc4-44e2d931e711", 00:15:16.355 "assigned_rate_limits": { 00:15:16.355 "rw_ios_per_sec": 0, 00:15:16.355 
"rw_mbytes_per_sec": 0, 00:15:16.355 "r_mbytes_per_sec": 0, 00:15:16.355 "w_mbytes_per_sec": 0 00:15:16.355 }, 00:15:16.355 "claimed": true, 00:15:16.355 "claim_type": "exclusive_write", 00:15:16.355 "zoned": false, 00:15:16.355 "supported_io_types": { 00:15:16.355 "read": true, 00:15:16.355 "write": true, 00:15:16.355 "unmap": true, 00:15:16.355 "flush": true, 00:15:16.355 "reset": true, 00:15:16.355 "nvme_admin": false, 00:15:16.355 "nvme_io": false, 00:15:16.355 "nvme_io_md": false, 00:15:16.355 "write_zeroes": true, 00:15:16.355 "zcopy": true, 00:15:16.355 "get_zone_info": false, 00:15:16.355 "zone_management": false, 00:15:16.355 "zone_append": false, 00:15:16.355 "compare": false, 00:15:16.355 "compare_and_write": false, 00:15:16.355 "abort": true, 00:15:16.355 "seek_hole": false, 00:15:16.355 "seek_data": false, 00:15:16.355 "copy": true, 00:15:16.355 "nvme_iov_md": false 00:15:16.355 }, 00:15:16.355 "memory_domains": [ 00:15:16.355 { 00:15:16.355 "dma_device_id": "system", 00:15:16.355 "dma_device_type": 1 00:15:16.355 }, 00:15:16.355 { 00:15:16.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.355 "dma_device_type": 2 00:15:16.355 } 00:15:16.355 ], 00:15:16.355 "driver_specific": {} 00:15:16.355 } 00:15:16.355 ] 00:15:16.355 16:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:16.355 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:16.355 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:16.355 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:16.355 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:16.355 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:16.355 16:32:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:16.355 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:16.355 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:16.355 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:16.355 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:16.355 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:16.355 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:16.355 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.355 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.614 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.614 "name": "Existed_Raid", 00:15:16.614 "uuid": "62fa42ff-26f1-4da9-97f5-5fdf610f79e9", 00:15:16.614 "strip_size_kb": 64, 00:15:16.614 "state": "online", 00:15:16.614 "raid_level": "concat", 00:15:16.614 "superblock": true, 00:15:16.614 "num_base_bdevs": 2, 00:15:16.614 "num_base_bdevs_discovered": 2, 00:15:16.614 "num_base_bdevs_operational": 2, 00:15:16.614 "base_bdevs_list": [ 00:15:16.614 { 00:15:16.614 "name": "BaseBdev1", 00:15:16.614 "uuid": "89d132d0-12b3-4170-a1a0-7f0a94c5a812", 00:15:16.614 "is_configured": true, 00:15:16.614 "data_offset": 2048, 00:15:16.614 "data_size": 63488 00:15:16.614 }, 00:15:16.614 { 00:15:16.614 "name": "BaseBdev2", 00:15:16.614 "uuid": "0cdcba77-1207-4bd5-abc4-44e2d931e711", 00:15:16.614 "is_configured": true, 00:15:16.614 
"data_offset": 2048, 00:15:16.614 "data_size": 63488 00:15:16.614 } 00:15:16.615 ] 00:15:16.615 }' 00:15:16.615 16:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.615 16:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.183 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:17.183 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:17.183 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:17.183 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:17.183 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:17.183 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:17.183 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:17.183 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:17.441 [2024-07-24 16:32:14.238761] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.441 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:17.441 "name": "Existed_Raid", 00:15:17.441 "aliases": [ 00:15:17.441 "62fa42ff-26f1-4da9-97f5-5fdf610f79e9" 00:15:17.441 ], 00:15:17.441 "product_name": "Raid Volume", 00:15:17.441 "block_size": 512, 00:15:17.441 "num_blocks": 126976, 00:15:17.441 "uuid": "62fa42ff-26f1-4da9-97f5-5fdf610f79e9", 00:15:17.441 "assigned_rate_limits": { 00:15:17.441 "rw_ios_per_sec": 0, 00:15:17.441 "rw_mbytes_per_sec": 0, 00:15:17.441 "r_mbytes_per_sec": 0, 00:15:17.441 
"w_mbytes_per_sec": 0 00:15:17.441 }, 00:15:17.441 "claimed": false, 00:15:17.441 "zoned": false, 00:15:17.441 "supported_io_types": { 00:15:17.441 "read": true, 00:15:17.441 "write": true, 00:15:17.441 "unmap": true, 00:15:17.441 "flush": true, 00:15:17.441 "reset": true, 00:15:17.441 "nvme_admin": false, 00:15:17.441 "nvme_io": false, 00:15:17.441 "nvme_io_md": false, 00:15:17.441 "write_zeroes": true, 00:15:17.441 "zcopy": false, 00:15:17.441 "get_zone_info": false, 00:15:17.441 "zone_management": false, 00:15:17.441 "zone_append": false, 00:15:17.441 "compare": false, 00:15:17.441 "compare_and_write": false, 00:15:17.441 "abort": false, 00:15:17.441 "seek_hole": false, 00:15:17.441 "seek_data": false, 00:15:17.441 "copy": false, 00:15:17.441 "nvme_iov_md": false 00:15:17.441 }, 00:15:17.441 "memory_domains": [ 00:15:17.441 { 00:15:17.441 "dma_device_id": "system", 00:15:17.441 "dma_device_type": 1 00:15:17.441 }, 00:15:17.441 { 00:15:17.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.441 "dma_device_type": 2 00:15:17.441 }, 00:15:17.441 { 00:15:17.441 "dma_device_id": "system", 00:15:17.441 "dma_device_type": 1 00:15:17.441 }, 00:15:17.441 { 00:15:17.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.441 "dma_device_type": 2 00:15:17.441 } 00:15:17.441 ], 00:15:17.442 "driver_specific": { 00:15:17.442 "raid": { 00:15:17.442 "uuid": "62fa42ff-26f1-4da9-97f5-5fdf610f79e9", 00:15:17.442 "strip_size_kb": 64, 00:15:17.442 "state": "online", 00:15:17.442 "raid_level": "concat", 00:15:17.442 "superblock": true, 00:15:17.442 "num_base_bdevs": 2, 00:15:17.442 "num_base_bdevs_discovered": 2, 00:15:17.442 "num_base_bdevs_operational": 2, 00:15:17.442 "base_bdevs_list": [ 00:15:17.442 { 00:15:17.442 "name": "BaseBdev1", 00:15:17.442 "uuid": "89d132d0-12b3-4170-a1a0-7f0a94c5a812", 00:15:17.442 "is_configured": true, 00:15:17.442 "data_offset": 2048, 00:15:17.442 "data_size": 63488 00:15:17.442 }, 00:15:17.442 { 00:15:17.442 "name": "BaseBdev2", 00:15:17.442 
"uuid": "0cdcba77-1207-4bd5-abc4-44e2d931e711", 00:15:17.442 "is_configured": true, 00:15:17.442 "data_offset": 2048, 00:15:17.442 "data_size": 63488 00:15:17.442 } 00:15:17.442 ] 00:15:17.442 } 00:15:17.442 } 00:15:17.442 }' 00:15:17.442 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:17.701 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:17.701 BaseBdev2' 00:15:17.701 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:17.701 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:17.701 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:17.701 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:17.701 "name": "BaseBdev1", 00:15:17.701 "aliases": [ 00:15:17.701 "89d132d0-12b3-4170-a1a0-7f0a94c5a812" 00:15:17.701 ], 00:15:17.701 "product_name": "Malloc disk", 00:15:17.701 "block_size": 512, 00:15:17.701 "num_blocks": 65536, 00:15:17.701 "uuid": "89d132d0-12b3-4170-a1a0-7f0a94c5a812", 00:15:17.701 "assigned_rate_limits": { 00:15:17.701 "rw_ios_per_sec": 0, 00:15:17.701 "rw_mbytes_per_sec": 0, 00:15:17.701 "r_mbytes_per_sec": 0, 00:15:17.701 "w_mbytes_per_sec": 0 00:15:17.701 }, 00:15:17.701 "claimed": true, 00:15:17.701 "claim_type": "exclusive_write", 00:15:17.701 "zoned": false, 00:15:17.701 "supported_io_types": { 00:15:17.701 "read": true, 00:15:17.701 "write": true, 00:15:17.701 "unmap": true, 00:15:17.701 "flush": true, 00:15:17.701 "reset": true, 00:15:17.701 "nvme_admin": false, 00:15:17.701 "nvme_io": false, 00:15:17.701 "nvme_io_md": false, 00:15:17.701 "write_zeroes": true, 
00:15:17.701 "zcopy": true, 00:15:17.701 "get_zone_info": false, 00:15:17.701 "zone_management": false, 00:15:17.701 "zone_append": false, 00:15:17.701 "compare": false, 00:15:17.701 "compare_and_write": false, 00:15:17.701 "abort": true, 00:15:17.701 "seek_hole": false, 00:15:17.701 "seek_data": false, 00:15:17.701 "copy": true, 00:15:17.701 "nvme_iov_md": false 00:15:17.701 }, 00:15:17.701 "memory_domains": [ 00:15:17.701 { 00:15:17.701 "dma_device_id": "system", 00:15:17.701 "dma_device_type": 1 00:15:17.701 }, 00:15:17.701 { 00:15:17.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.701 "dma_device_type": 2 00:15:17.701 } 00:15:17.701 ], 00:15:17.701 "driver_specific": {} 00:15:17.701 }' 00:15:17.701 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.960 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:17.960 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:17.960 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:17.960 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:17.960 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:17.960 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.960 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:17.960 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:17.960 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:18.231 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:18.231 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:18.231 16:32:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:18.231 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:18.231 16:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:18.543 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:18.543 "name": "BaseBdev2", 00:15:18.543 "aliases": [ 00:15:18.543 "0cdcba77-1207-4bd5-abc4-44e2d931e711" 00:15:18.543 ], 00:15:18.543 "product_name": "Malloc disk", 00:15:18.543 "block_size": 512, 00:15:18.543 "num_blocks": 65536, 00:15:18.543 "uuid": "0cdcba77-1207-4bd5-abc4-44e2d931e711", 00:15:18.543 "assigned_rate_limits": { 00:15:18.543 "rw_ios_per_sec": 0, 00:15:18.543 "rw_mbytes_per_sec": 0, 00:15:18.543 "r_mbytes_per_sec": 0, 00:15:18.543 "w_mbytes_per_sec": 0 00:15:18.543 }, 00:15:18.543 "claimed": true, 00:15:18.543 "claim_type": "exclusive_write", 00:15:18.543 "zoned": false, 00:15:18.543 "supported_io_types": { 00:15:18.543 "read": true, 00:15:18.543 "write": true, 00:15:18.543 "unmap": true, 00:15:18.543 "flush": true, 00:15:18.543 "reset": true, 00:15:18.543 "nvme_admin": false, 00:15:18.543 "nvme_io": false, 00:15:18.543 "nvme_io_md": false, 00:15:18.543 "write_zeroes": true, 00:15:18.543 "zcopy": true, 00:15:18.543 "get_zone_info": false, 00:15:18.543 "zone_management": false, 00:15:18.543 "zone_append": false, 00:15:18.543 "compare": false, 00:15:18.543 "compare_and_write": false, 00:15:18.543 "abort": true, 00:15:18.543 "seek_hole": false, 00:15:18.543 "seek_data": false, 00:15:18.543 "copy": true, 00:15:18.543 "nvme_iov_md": false 00:15:18.543 }, 00:15:18.543 "memory_domains": [ 00:15:18.543 { 00:15:18.543 "dma_device_id": "system", 00:15:18.543 "dma_device_type": 1 00:15:18.543 }, 00:15:18.543 { 00:15:18.543 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:18.543 "dma_device_type": 2 00:15:18.543 } 00:15:18.543 ], 00:15:18.543 "driver_specific": {} 00:15:18.543 }' 00:15:18.543 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:18.543 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:18.543 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:18.544 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:18.544 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:18.544 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:18.544 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:18.544 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:18.544 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:18.544 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:18.544 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:18.803 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:18.803 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:18.803 [2024-07-24 16:32:15.638257] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:18.803 [2024-07-24 16:32:15.638293] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.803 [2024-07-24 16:32:15.638351] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.062 16:32:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.062 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:15:19.321 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.321 "name": "Existed_Raid", 00:15:19.321 "uuid": "62fa42ff-26f1-4da9-97f5-5fdf610f79e9", 00:15:19.321 "strip_size_kb": 64, 00:15:19.321 "state": "offline", 00:15:19.321 "raid_level": "concat", 00:15:19.321 "superblock": true, 00:15:19.321 "num_base_bdevs": 2, 00:15:19.321 "num_base_bdevs_discovered": 1, 00:15:19.321 "num_base_bdevs_operational": 1, 00:15:19.321 "base_bdevs_list": [ 00:15:19.321 { 00:15:19.321 "name": null, 00:15:19.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.321 "is_configured": false, 00:15:19.321 "data_offset": 2048, 00:15:19.321 "data_size": 63488 00:15:19.321 }, 00:15:19.321 { 00:15:19.321 "name": "BaseBdev2", 00:15:19.321 "uuid": "0cdcba77-1207-4bd5-abc4-44e2d931e711", 00:15:19.321 "is_configured": true, 00:15:19.321 "data_offset": 2048, 00:15:19.321 "data_size": 63488 00:15:19.321 } 00:15:19.321 ] 00:15:19.321 }' 00:15:19.321 16:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.321 16:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.888 16:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:19.888 16:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:19.888 16:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.888 16:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:19.888 16:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:19.888 16:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:19.888 16:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:20.147 [2024-07-24 16:32:16.939125] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:20.147 [2024-07-24 16:32:16.939188] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:15:20.406 16:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:20.406 16:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:20.406 16:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.406 16:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:20.666 16:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:20.666 16:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:20.666 16:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:20.666 16:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1607110 00:15:20.666 16:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 1607110 ']' 00:15:20.666 16:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 1607110 00:15:20.666 16:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:20.666 16:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.666 16:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 1607110 00:15:20.666 16:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:20.666 16:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:20.666 16:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1607110' 00:15:20.666 killing process with pid 1607110 00:15:20.666 16:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 1607110 00:15:20.666 [2024-07-24 16:32:17.380495] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:20.666 16:32:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 1607110 00:15:20.666 [2024-07-24 16:32:17.405035] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.570 16:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:22.570 00:15:22.570 real 0m11.922s 00:15:22.570 user 0m19.403s 00:15:22.570 sys 0m2.120s 00:15:22.570 16:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:22.570 16:32:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.570 ************************************ 00:15:22.570 END TEST raid_state_function_test_sb 00:15:22.570 ************************************ 00:15:22.570 16:32:19 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:22.570 16:32:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:22.570 16:32:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:22.571 16:32:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.571 ************************************ 00:15:22.571 START TEST raid_superblock_test 00:15:22.571 ************************************ 00:15:22.571 16:32:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=1609268 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 1609268 /var/tmp/spdk-raid.sock 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 1609268 ']' 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:22.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.571 16:32:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.571 [2024-07-24 16:32:19.278571] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:15:22.571 [2024-07-24 16:32:19.278683] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609268 ] 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:02.3 cannot be used 
00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:22.571 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:22.571 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:22.571 
00:15:22.830 [2024-07-24 16:32:19.506308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.089 [2024-07-24 16:32:19.796809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.348 [2024-07-24 16:32:20.143994] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.348 [2024-07-24 16:32:20.144029] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.606 16:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.606 16:32:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:23.606 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:15:23.606 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:23.606 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:15:23.606 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:15:23.606 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:23.606 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:23.606 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:23.606 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:23.606 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:23.865 malloc1 00:15:23.865 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:24.124 [2024-07-24 16:32:20.901978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:24.124 [2024-07-24 16:32:20.902038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.124 [2024-07-24 16:32:20.902070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:15:24.124 [2024-07-24 16:32:20.902086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.124 [2024-07-24 16:32:20.904825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.124 [2024-07-24 16:32:20.904860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:24.124 pt1 00:15:24.124 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:24.124 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:24.124 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:15:24.124 16:32:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:15:24.124 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:24.124 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:24.124 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:24.124 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:24.124 16:32:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:24.383 malloc2 00:15:24.383 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:24.641 [2024-07-24 16:32:21.390491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:24.641 [2024-07-24 16:32:21.390549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.641 [2024-07-24 16:32:21.390577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:15:24.641 [2024-07-24 16:32:21.390593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.641 [2024-07-24 16:32:21.393376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.641 [2024-07-24 16:32:21.393417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:24.641 pt2 00:15:24.641 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:24.641 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:24.641 16:32:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:24.900 [2024-07-24 16:32:21.619110] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:24.900 [2024-07-24 16:32:21.621424] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:24.900 [2024-07-24 16:32:21.621642] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:15:24.900 [2024-07-24 16:32:21.621664] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:24.900 [2024-07-24 16:32:21.621993] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:24.900 [2024-07-24 16:32:21.622218] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:15:24.900 [2024-07-24 16:32:21.622237] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:15:24.900 [2024-07-24 16:32:21.622428] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.900 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:24.900 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:24.900 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:24.900 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:24.900 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:24.900 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:24.900 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:15:24.900 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:24.900 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:24.900 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:24.900 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.900 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.158 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:25.158 "name": "raid_bdev1", 00:15:25.158 "uuid": "f415e7b0-c6da-4496-bf39-5d74e7043112", 00:15:25.159 "strip_size_kb": 64, 00:15:25.159 "state": "online", 00:15:25.159 "raid_level": "concat", 00:15:25.159 "superblock": true, 00:15:25.159 "num_base_bdevs": 2, 00:15:25.159 "num_base_bdevs_discovered": 2, 00:15:25.159 "num_base_bdevs_operational": 2, 00:15:25.159 "base_bdevs_list": [ 00:15:25.159 { 00:15:25.159 "name": "pt1", 00:15:25.159 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:25.159 "is_configured": true, 00:15:25.159 "data_offset": 2048, 00:15:25.159 "data_size": 63488 00:15:25.159 }, 00:15:25.159 { 00:15:25.159 "name": "pt2", 00:15:25.159 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:25.159 "is_configured": true, 00:15:25.159 "data_offset": 2048, 00:15:25.159 "data_size": 63488 00:15:25.159 } 00:15:25.159 ] 00:15:25.159 }' 00:15:25.159 16:32:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:25.159 16:32:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.727 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:15:25.727 16:32:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:25.727 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:25.727 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:25.727 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:25.727 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:25.727 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:25.727 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:25.987 [2024-07-24 16:32:22.642166] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.987 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:25.987 "name": "raid_bdev1", 00:15:25.987 "aliases": [ 00:15:25.987 "f415e7b0-c6da-4496-bf39-5d74e7043112" 00:15:25.987 ], 00:15:25.987 "product_name": "Raid Volume", 00:15:25.987 "block_size": 512, 00:15:25.987 "num_blocks": 126976, 00:15:25.987 "uuid": "f415e7b0-c6da-4496-bf39-5d74e7043112", 00:15:25.987 "assigned_rate_limits": { 00:15:25.987 "rw_ios_per_sec": 0, 00:15:25.987 "rw_mbytes_per_sec": 0, 00:15:25.987 "r_mbytes_per_sec": 0, 00:15:25.987 "w_mbytes_per_sec": 0 00:15:25.987 }, 00:15:25.987 "claimed": false, 00:15:25.987 "zoned": false, 00:15:25.987 "supported_io_types": { 00:15:25.987 "read": true, 00:15:25.987 "write": true, 00:15:25.987 "unmap": true, 00:15:25.987 "flush": true, 00:15:25.987 "reset": true, 00:15:25.987 "nvme_admin": false, 00:15:25.987 "nvme_io": false, 00:15:25.987 "nvme_io_md": false, 00:15:25.987 "write_zeroes": true, 00:15:25.987 "zcopy": false, 00:15:25.987 "get_zone_info": false, 00:15:25.987 "zone_management": false, 00:15:25.987 "zone_append": false, 
00:15:25.987 "compare": false, 00:15:25.987 "compare_and_write": false, 00:15:25.987 "abort": false, 00:15:25.987 "seek_hole": false, 00:15:25.987 "seek_data": false, 00:15:25.987 "copy": false, 00:15:25.987 "nvme_iov_md": false 00:15:25.987 }, 00:15:25.987 "memory_domains": [ 00:15:25.987 { 00:15:25.987 "dma_device_id": "system", 00:15:25.987 "dma_device_type": 1 00:15:25.987 }, 00:15:25.987 { 00:15:25.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.987 "dma_device_type": 2 00:15:25.987 }, 00:15:25.987 { 00:15:25.987 "dma_device_id": "system", 00:15:25.987 "dma_device_type": 1 00:15:25.987 }, 00:15:25.987 { 00:15:25.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.987 "dma_device_type": 2 00:15:25.987 } 00:15:25.987 ], 00:15:25.987 "driver_specific": { 00:15:25.987 "raid": { 00:15:25.987 "uuid": "f415e7b0-c6da-4496-bf39-5d74e7043112", 00:15:25.987 "strip_size_kb": 64, 00:15:25.987 "state": "online", 00:15:25.987 "raid_level": "concat", 00:15:25.987 "superblock": true, 00:15:25.987 "num_base_bdevs": 2, 00:15:25.987 "num_base_bdevs_discovered": 2, 00:15:25.987 "num_base_bdevs_operational": 2, 00:15:25.987 "base_bdevs_list": [ 00:15:25.987 { 00:15:25.987 "name": "pt1", 00:15:25.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:25.987 "is_configured": true, 00:15:25.987 "data_offset": 2048, 00:15:25.987 "data_size": 63488 00:15:25.987 }, 00:15:25.987 { 00:15:25.987 "name": "pt2", 00:15:25.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:25.987 "is_configured": true, 00:15:25.987 "data_offset": 2048, 00:15:25.987 "data_size": 63488 00:15:25.987 } 00:15:25.987 ] 00:15:25.987 } 00:15:25.987 } 00:15:25.987 }' 00:15:25.987 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:25.987 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:25.987 pt2' 00:15:25.987 16:32:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:25.987 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:25.987 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:26.246 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:26.246 "name": "pt1", 00:15:26.246 "aliases": [ 00:15:26.246 "00000000-0000-0000-0000-000000000001" 00:15:26.246 ], 00:15:26.246 "product_name": "passthru", 00:15:26.246 "block_size": 512, 00:15:26.246 "num_blocks": 65536, 00:15:26.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:26.246 "assigned_rate_limits": { 00:15:26.246 "rw_ios_per_sec": 0, 00:15:26.246 "rw_mbytes_per_sec": 0, 00:15:26.246 "r_mbytes_per_sec": 0, 00:15:26.246 "w_mbytes_per_sec": 0 00:15:26.246 }, 00:15:26.246 "claimed": true, 00:15:26.246 "claim_type": "exclusive_write", 00:15:26.246 "zoned": false, 00:15:26.246 "supported_io_types": { 00:15:26.246 "read": true, 00:15:26.247 "write": true, 00:15:26.247 "unmap": true, 00:15:26.247 "flush": true, 00:15:26.247 "reset": true, 00:15:26.247 "nvme_admin": false, 00:15:26.247 "nvme_io": false, 00:15:26.247 "nvme_io_md": false, 00:15:26.247 "write_zeroes": true, 00:15:26.247 "zcopy": true, 00:15:26.247 "get_zone_info": false, 00:15:26.247 "zone_management": false, 00:15:26.247 "zone_append": false, 00:15:26.247 "compare": false, 00:15:26.247 "compare_and_write": false, 00:15:26.247 "abort": true, 00:15:26.247 "seek_hole": false, 00:15:26.247 "seek_data": false, 00:15:26.247 "copy": true, 00:15:26.247 "nvme_iov_md": false 00:15:26.247 }, 00:15:26.247 "memory_domains": [ 00:15:26.247 { 00:15:26.247 "dma_device_id": "system", 00:15:26.247 "dma_device_type": 1 00:15:26.247 }, 00:15:26.247 { 00:15:26.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.247 
"dma_device_type": 2 00:15:26.247 } 00:15:26.247 ], 00:15:26.247 "driver_specific": { 00:15:26.247 "passthru": { 00:15:26.247 "name": "pt1", 00:15:26.247 "base_bdev_name": "malloc1" 00:15:26.247 } 00:15:26.247 } 00:15:26.247 }' 00:15:26.247 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.247 16:32:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.247 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:26.247 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.247 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:26.247 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:26.247 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.506 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:26.506 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:26.506 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.506 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:26.506 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:26.506 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:26.506 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:26.506 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:26.765 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:26.765 "name": "pt2", 00:15:26.765 "aliases": [ 00:15:26.765 
"00000000-0000-0000-0000-000000000002" 00:15:26.765 ], 00:15:26.765 "product_name": "passthru", 00:15:26.765 "block_size": 512, 00:15:26.765 "num_blocks": 65536, 00:15:26.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.765 "assigned_rate_limits": { 00:15:26.765 "rw_ios_per_sec": 0, 00:15:26.765 "rw_mbytes_per_sec": 0, 00:15:26.765 "r_mbytes_per_sec": 0, 00:15:26.765 "w_mbytes_per_sec": 0 00:15:26.765 }, 00:15:26.765 "claimed": true, 00:15:26.765 "claim_type": "exclusive_write", 00:15:26.765 "zoned": false, 00:15:26.765 "supported_io_types": { 00:15:26.765 "read": true, 00:15:26.765 "write": true, 00:15:26.765 "unmap": true, 00:15:26.765 "flush": true, 00:15:26.765 "reset": true, 00:15:26.765 "nvme_admin": false, 00:15:26.765 "nvme_io": false, 00:15:26.765 "nvme_io_md": false, 00:15:26.765 "write_zeroes": true, 00:15:26.765 "zcopy": true, 00:15:26.765 "get_zone_info": false, 00:15:26.765 "zone_management": false, 00:15:26.765 "zone_append": false, 00:15:26.765 "compare": false, 00:15:26.765 "compare_and_write": false, 00:15:26.765 "abort": true, 00:15:26.765 "seek_hole": false, 00:15:26.765 "seek_data": false, 00:15:26.765 "copy": true, 00:15:26.765 "nvme_iov_md": false 00:15:26.765 }, 00:15:26.765 "memory_domains": [ 00:15:26.765 { 00:15:26.765 "dma_device_id": "system", 00:15:26.765 "dma_device_type": 1 00:15:26.765 }, 00:15:26.765 { 00:15:26.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.765 "dma_device_type": 2 00:15:26.765 } 00:15:26.765 ], 00:15:26.765 "driver_specific": { 00:15:26.765 "passthru": { 00:15:26.765 "name": "pt2", 00:15:26.765 "base_bdev_name": "malloc2" 00:15:26.765 } 00:15:26.765 } 00:15:26.765 }' 00:15:26.765 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.765 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:26.765 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:26.765 16:32:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.024 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:27.024 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:27.024 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.024 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:27.024 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:27.024 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.024 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:27.024 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:27.024 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:27.024 16:32:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:15:27.283 [2024-07-24 16:32:24.045958] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.283 16:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=f415e7b0-c6da-4496-bf39-5d74e7043112 00:15:27.283 16:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z f415e7b0-c6da-4496-bf39-5d74e7043112 ']' 00:15:27.283 16:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:27.542 [2024-07-24 16:32:24.274262] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.542 [2024-07-24 16:32:24.274292] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:15:27.542 [2024-07-24 16:32:24.274381] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.542 [2024-07-24 16:32:24.274441] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.542 [2024-07-24 16:32:24.274463] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:15:27.542 16:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.542 16:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:15:27.802 16:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:15:27.802 16:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:15:27.802 16:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:27.802 16:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:28.061 16:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:28.061 16:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:28.320 16:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:28.320 16:32:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:15:28.579 
16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' 
-n raid_bdev1 00:15:28.579 [2024-07-24 16:32:25.401284] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:28.579 [2024-07-24 16:32:25.403600] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:28.579 [2024-07-24 16:32:25.403672] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:28.579 [2024-07-24 16:32:25.403729] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:28.579 [2024-07-24 16:32:25.403753] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:28.579 [2024-07-24 16:32:25.403770] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:15:28.579 request: 00:15:28.579 { 00:15:28.579 "name": "raid_bdev1", 00:15:28.579 "raid_level": "concat", 00:15:28.579 "base_bdevs": [ 00:15:28.579 "malloc1", 00:15:28.579 "malloc2" 00:15:28.579 ], 00:15:28.579 "strip_size_kb": 64, 00:15:28.579 "superblock": false, 00:15:28.579 "method": "bdev_raid_create", 00:15:28.579 "req_id": 1 00:15:28.579 } 00:15:28.579 Got JSON-RPC error response 00:15:28.579 response: 00:15:28.579 { 00:15:28.579 "code": -17, 00:15:28.579 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:28.579 } 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.579 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:15:28.838 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:15:28.838 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:15:28.838 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:29.097 [2024-07-24 16:32:25.858425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:29.097 [2024-07-24 16:32:25.858489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.097 [2024-07-24 16:32:25.858518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:15:29.097 [2024-07-24 16:32:25.858536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.097 [2024-07-24 16:32:25.861346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.097 [2024-07-24 16:32:25.861386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:29.097 [2024-07-24 16:32:25.861477] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:29.097 [2024-07-24 16:32:25.861562] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:29.097 pt1 00:15:29.097 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:29.097 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:29.097 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:29.097 16:32:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:29.097 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:29.097 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:29.097 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.097 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.097 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.097 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.097 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.097 16:32:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.356 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.356 "name": "raid_bdev1", 00:15:29.356 "uuid": "f415e7b0-c6da-4496-bf39-5d74e7043112", 00:15:29.356 "strip_size_kb": 64, 00:15:29.356 "state": "configuring", 00:15:29.356 "raid_level": "concat", 00:15:29.356 "superblock": true, 00:15:29.356 "num_base_bdevs": 2, 00:15:29.356 "num_base_bdevs_discovered": 1, 00:15:29.356 "num_base_bdevs_operational": 2, 00:15:29.356 "base_bdevs_list": [ 00:15:29.356 { 00:15:29.356 "name": "pt1", 00:15:29.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:29.356 "is_configured": true, 00:15:29.357 "data_offset": 2048, 00:15:29.357 "data_size": 63488 00:15:29.357 }, 00:15:29.357 { 00:15:29.357 "name": null, 00:15:29.357 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.357 "is_configured": false, 00:15:29.357 "data_offset": 2048, 00:15:29.357 "data_size": 63488 00:15:29.357 } 00:15:29.357 ] 00:15:29.357 }' 00:15:29.357 
16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.357 16:32:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.925 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:15:29.925 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:15:29.925 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:29.925 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:30.185 [2024-07-24 16:32:26.853160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:30.185 [2024-07-24 16:32:26.853227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:30.185 [2024-07-24 16:32:26.853254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:15:30.185 [2024-07-24 16:32:26.853272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:30.185 [2024-07-24 16:32:26.853844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:30.185 [2024-07-24 16:32:26.853874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:30.185 [2024-07-24 16:32:26.853972] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:30.185 [2024-07-24 16:32:26.854009] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:30.185 [2024-07-24 16:32:26.854185] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:15:30.185 [2024-07-24 16:32:26.854207] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:30.185 [2024-07-24 16:32:26.854501] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:15:30.185 [2024-07-24 16:32:26.854716] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:15:30.185 [2024-07-24 16:32:26.854730] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:15:30.185 [2024-07-24 16:32:26.854907] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.185 pt2 00:15:30.185 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:15:30.185 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:30.185 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:30.185 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:30.185 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:30.185 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:30.185 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:30.185 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:30.185 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:30.185 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:30.185 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:30.185 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:30.185 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:15:30.185 16:32:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.444 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:30.444 "name": "raid_bdev1", 00:15:30.444 "uuid": "f415e7b0-c6da-4496-bf39-5d74e7043112", 00:15:30.444 "strip_size_kb": 64, 00:15:30.444 "state": "online", 00:15:30.444 "raid_level": "concat", 00:15:30.444 "superblock": true, 00:15:30.444 "num_base_bdevs": 2, 00:15:30.444 "num_base_bdevs_discovered": 2, 00:15:30.444 "num_base_bdevs_operational": 2, 00:15:30.444 "base_bdevs_list": [ 00:15:30.444 { 00:15:30.444 "name": "pt1", 00:15:30.444 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:30.444 "is_configured": true, 00:15:30.444 "data_offset": 2048, 00:15:30.444 "data_size": 63488 00:15:30.444 }, 00:15:30.444 { 00:15:30.444 "name": "pt2", 00:15:30.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:30.444 "is_configured": true, 00:15:30.444 "data_offset": 2048, 00:15:30.444 "data_size": 63488 00:15:30.444 } 00:15:30.444 ] 00:15:30.444 }' 00:15:30.445 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:30.445 16:32:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.012 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:15:31.012 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:31.012 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:31.012 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:31.012 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:31.012 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:31.012 16:32:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:31.013 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:31.272 [2024-07-24 16:32:27.888302] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.272 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:31.272 "name": "raid_bdev1", 00:15:31.272 "aliases": [ 00:15:31.272 "f415e7b0-c6da-4496-bf39-5d74e7043112" 00:15:31.272 ], 00:15:31.272 "product_name": "Raid Volume", 00:15:31.272 "block_size": 512, 00:15:31.272 "num_blocks": 126976, 00:15:31.272 "uuid": "f415e7b0-c6da-4496-bf39-5d74e7043112", 00:15:31.272 "assigned_rate_limits": { 00:15:31.272 "rw_ios_per_sec": 0, 00:15:31.272 "rw_mbytes_per_sec": 0, 00:15:31.272 "r_mbytes_per_sec": 0, 00:15:31.272 "w_mbytes_per_sec": 0 00:15:31.272 }, 00:15:31.272 "claimed": false, 00:15:31.272 "zoned": false, 00:15:31.272 "supported_io_types": { 00:15:31.272 "read": true, 00:15:31.272 "write": true, 00:15:31.272 "unmap": true, 00:15:31.272 "flush": true, 00:15:31.272 "reset": true, 00:15:31.272 "nvme_admin": false, 00:15:31.272 "nvme_io": false, 00:15:31.272 "nvme_io_md": false, 00:15:31.272 "write_zeroes": true, 00:15:31.272 "zcopy": false, 00:15:31.272 "get_zone_info": false, 00:15:31.272 "zone_management": false, 00:15:31.272 "zone_append": false, 00:15:31.272 "compare": false, 00:15:31.272 "compare_and_write": false, 00:15:31.272 "abort": false, 00:15:31.272 "seek_hole": false, 00:15:31.272 "seek_data": false, 00:15:31.272 "copy": false, 00:15:31.272 "nvme_iov_md": false 00:15:31.272 }, 00:15:31.272 "memory_domains": [ 00:15:31.272 { 00:15:31.272 "dma_device_id": "system", 00:15:31.272 "dma_device_type": 1 00:15:31.272 }, 00:15:31.272 { 00:15:31.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.272 "dma_device_type": 2 00:15:31.272 }, 00:15:31.272 { 00:15:31.272 
"dma_device_id": "system", 00:15:31.272 "dma_device_type": 1 00:15:31.272 }, 00:15:31.272 { 00:15:31.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.272 "dma_device_type": 2 00:15:31.272 } 00:15:31.272 ], 00:15:31.272 "driver_specific": { 00:15:31.272 "raid": { 00:15:31.272 "uuid": "f415e7b0-c6da-4496-bf39-5d74e7043112", 00:15:31.272 "strip_size_kb": 64, 00:15:31.272 "state": "online", 00:15:31.272 "raid_level": "concat", 00:15:31.272 "superblock": true, 00:15:31.272 "num_base_bdevs": 2, 00:15:31.272 "num_base_bdevs_discovered": 2, 00:15:31.272 "num_base_bdevs_operational": 2, 00:15:31.272 "base_bdevs_list": [ 00:15:31.272 { 00:15:31.272 "name": "pt1", 00:15:31.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.272 "is_configured": true, 00:15:31.272 "data_offset": 2048, 00:15:31.272 "data_size": 63488 00:15:31.272 }, 00:15:31.272 { 00:15:31.272 "name": "pt2", 00:15:31.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.272 "is_configured": true, 00:15:31.272 "data_offset": 2048, 00:15:31.272 "data_size": 63488 00:15:31.272 } 00:15:31.272 ] 00:15:31.272 } 00:15:31.272 } 00:15:31.272 }' 00:15:31.272 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.272 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:31.272 pt2' 00:15:31.272 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:31.272 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:31.272 16:32:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:31.532 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:31.532 "name": "pt1", 00:15:31.532 "aliases": [ 00:15:31.532 
"00000000-0000-0000-0000-000000000001" 00:15:31.532 ], 00:15:31.532 "product_name": "passthru", 00:15:31.532 "block_size": 512, 00:15:31.532 "num_blocks": 65536, 00:15:31.532 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.532 "assigned_rate_limits": { 00:15:31.532 "rw_ios_per_sec": 0, 00:15:31.532 "rw_mbytes_per_sec": 0, 00:15:31.532 "r_mbytes_per_sec": 0, 00:15:31.532 "w_mbytes_per_sec": 0 00:15:31.532 }, 00:15:31.532 "claimed": true, 00:15:31.532 "claim_type": "exclusive_write", 00:15:31.532 "zoned": false, 00:15:31.532 "supported_io_types": { 00:15:31.532 "read": true, 00:15:31.532 "write": true, 00:15:31.532 "unmap": true, 00:15:31.532 "flush": true, 00:15:31.532 "reset": true, 00:15:31.532 "nvme_admin": false, 00:15:31.532 "nvme_io": false, 00:15:31.532 "nvme_io_md": false, 00:15:31.532 "write_zeroes": true, 00:15:31.532 "zcopy": true, 00:15:31.532 "get_zone_info": false, 00:15:31.532 "zone_management": false, 00:15:31.532 "zone_append": false, 00:15:31.532 "compare": false, 00:15:31.532 "compare_and_write": false, 00:15:31.532 "abort": true, 00:15:31.532 "seek_hole": false, 00:15:31.532 "seek_data": false, 00:15:31.532 "copy": true, 00:15:31.532 "nvme_iov_md": false 00:15:31.532 }, 00:15:31.532 "memory_domains": [ 00:15:31.532 { 00:15:31.532 "dma_device_id": "system", 00:15:31.532 "dma_device_type": 1 00:15:31.532 }, 00:15:31.532 { 00:15:31.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.532 "dma_device_type": 2 00:15:31.532 } 00:15:31.532 ], 00:15:31.532 "driver_specific": { 00:15:31.532 "passthru": { 00:15:31.532 "name": "pt1", 00:15:31.532 "base_bdev_name": "malloc1" 00:15:31.532 } 00:15:31.532 } 00:15:31.532 }' 00:15:31.532 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:31.532 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:31.532 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:31.532 16:32:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:31.532 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:31.532 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:31.532 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:31.796 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:31.796 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:31.796 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:31.796 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:31.796 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:31.796 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:31.796 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:31.796 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:32.055 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.055 "name": "pt2", 00:15:32.055 "aliases": [ 00:15:32.055 "00000000-0000-0000-0000-000000000002" 00:15:32.055 ], 00:15:32.055 "product_name": "passthru", 00:15:32.055 "block_size": 512, 00:15:32.055 "num_blocks": 65536, 00:15:32.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.055 "assigned_rate_limits": { 00:15:32.055 "rw_ios_per_sec": 0, 00:15:32.055 "rw_mbytes_per_sec": 0, 00:15:32.055 "r_mbytes_per_sec": 0, 00:15:32.055 "w_mbytes_per_sec": 0 00:15:32.055 }, 00:15:32.055 "claimed": true, 00:15:32.055 "claim_type": "exclusive_write", 00:15:32.055 "zoned": false, 00:15:32.055 "supported_io_types": { 
00:15:32.055 "read": true, 00:15:32.055 "write": true, 00:15:32.055 "unmap": true, 00:15:32.055 "flush": true, 00:15:32.055 "reset": true, 00:15:32.055 "nvme_admin": false, 00:15:32.055 "nvme_io": false, 00:15:32.055 "nvme_io_md": false, 00:15:32.055 "write_zeroes": true, 00:15:32.055 "zcopy": true, 00:15:32.055 "get_zone_info": false, 00:15:32.055 "zone_management": false, 00:15:32.055 "zone_append": false, 00:15:32.055 "compare": false, 00:15:32.055 "compare_and_write": false, 00:15:32.055 "abort": true, 00:15:32.055 "seek_hole": false, 00:15:32.055 "seek_data": false, 00:15:32.055 "copy": true, 00:15:32.055 "nvme_iov_md": false 00:15:32.055 }, 00:15:32.055 "memory_domains": [ 00:15:32.055 { 00:15:32.055 "dma_device_id": "system", 00:15:32.055 "dma_device_type": 1 00:15:32.055 }, 00:15:32.055 { 00:15:32.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.055 "dma_device_type": 2 00:15:32.055 } 00:15:32.055 ], 00:15:32.055 "driver_specific": { 00:15:32.055 "passthru": { 00:15:32.055 "name": "pt2", 00:15:32.055 "base_bdev_name": "malloc2" 00:15:32.055 } 00:15:32.055 } 00:15:32.055 }' 00:15:32.055 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.055 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.055 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:32.055 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.055 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.055 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:32.055 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.313 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.313 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:15:32.313 16:32:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.313 16:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.313 16:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.313 16:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:32.313 16:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:15:32.572 [2024-07-24 16:32:29.276055] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.572 16:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' f415e7b0-c6da-4496-bf39-5d74e7043112 '!=' f415e7b0-c6da-4496-bf39-5d74e7043112 ']' 00:15:32.572 16:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:15:32.572 16:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:32.572 16:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:32.572 16:32:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 1609268 00:15:32.572 16:32:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 1609268 ']' 00:15:32.572 16:32:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 1609268 00:15:32.572 16:32:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:32.572 16:32:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.572 16:32:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1609268 00:15:32.572 16:32:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:32.572 16:32:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:32.572 16:32:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1609268' 00:15:32.572 killing process with pid 1609268 00:15:32.572 16:32:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 1609268 00:15:32.572 [2024-07-24 16:32:29.351953] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:32.572 [2024-07-24 16:32:29.352051] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.572 [2024-07-24 16:32:29.352109] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:32.572 16:32:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 1609268 00:15:32.572 [2024-07-24 16:32:29.352128] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:15:32.831 [2024-07-24 16:32:29.541578] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:34.736 16:32:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:15:34.736 00:15:34.736 real 0m12.083s 00:15:34.736 user 0m20.005s 00:15:34.736 sys 0m2.061s 00:15:34.736 16:32:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.736 16:32:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.736 ************************************ 00:15:34.736 END TEST raid_superblock_test 00:15:34.736 ************************************ 00:15:34.736 16:32:31 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:15:34.736 16:32:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:34.736 16:32:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.736 16:32:31 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:15:34.736 ************************************ 00:15:34.736 START TEST raid_read_error_test 00:15:34.736 ************************************ 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:34.736 16:32:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:15:34.736 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:15:34.737 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:15:34.737 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:34.737 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.6VKriCDD5c 00:15:34.737 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1611525 00:15:34.737 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1611525 /var/tmp/spdk-raid.sock 00:15:34.737 16:32:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:34.737 16:32:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 1611525 ']' 00:15:34.737 16:32:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:34.737 16:32:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.737 16:32:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:34.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:15:34.737 16:32:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.737 16:32:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.737 [2024-07-24 16:32:31.471695] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:15:34.737 [2024-07-24 16:32:31.471817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611525 ] 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:01.7 
cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:34.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:34.996 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:34.996 [2024-07-24 16:32:31.698795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.254 [2024-07-24 16:32:31.983754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.513 [2024-07-24 16:32:32.337169] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.513 [2024-07-24 16:32:32.337222] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.771 16:32:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.771 16:32:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:35.771 16:32:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:35.771 16:32:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:36.030 BaseBdev1_malloc 00:15:36.030 16:32:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:36.288 true 00:15:36.288 16:32:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:36.547 [2024-07-24 16:32:33.238304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:36.547 [2024-07-24 16:32:33.238370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.547 [2024-07-24 16:32:33.238396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:15:36.547 [2024-07-24 16:32:33.238419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.547 [2024-07-24 16:32:33.241194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.547 [2024-07-24 16:32:33.241233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:36.547 BaseBdev1 00:15:36.547 16:32:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:36.547 16:32:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:36.805 BaseBdev2_malloc 00:15:36.805 16:32:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:37.064 true 00:15:37.064 16:32:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:37.403 [2024-07-24 16:32:33.978161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:37.403 [2024-07-24 16:32:33.978218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.403 [2024-07-24 16:32:33.978242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:15:37.404 [2024-07-24 16:32:33.978262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.404 [2024-07-24 16:32:33.980979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.404 [2024-07-24 16:32:33.981016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:37.404 BaseBdev2 00:15:37.404 16:32:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:37.404 [2024-07-24 16:32:34.206848] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.404 [2024-07-24 16:32:34.209199] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.404 [2024-07-24 16:32:34.209457] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:15:37.404 [2024-07-24 16:32:34.209480] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:37.404 [2024-07-24 16:32:34.209813] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:37.404 [2024-07-24 16:32:34.210070] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:15:37.404 [2024-07-24 
16:32:34.210086] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:15:37.404 [2024-07-24 16:32:34.210297] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.404 16:32:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:37.404 16:32:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:37.404 16:32:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:37.404 16:32:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:37.404 16:32:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:37.404 16:32:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:37.404 16:32:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:37.404 16:32:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:37.404 16:32:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:37.404 16:32:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:37.404 16:32:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.404 16:32:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.663 16:32:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:37.663 "name": "raid_bdev1", 00:15:37.663 "uuid": "ea25c4ab-8076-4e01-b5a3-906d7347f4bb", 00:15:37.663 "strip_size_kb": 64, 00:15:37.663 "state": "online", 00:15:37.663 "raid_level": "concat", 00:15:37.663 
"superblock": true, 00:15:37.663 "num_base_bdevs": 2, 00:15:37.663 "num_base_bdevs_discovered": 2, 00:15:37.663 "num_base_bdevs_operational": 2, 00:15:37.663 "base_bdevs_list": [ 00:15:37.663 { 00:15:37.663 "name": "BaseBdev1", 00:15:37.663 "uuid": "d85a7b45-1bd6-588c-90a2-afd658ffd3f2", 00:15:37.663 "is_configured": true, 00:15:37.663 "data_offset": 2048, 00:15:37.663 "data_size": 63488 00:15:37.663 }, 00:15:37.663 { 00:15:37.663 "name": "BaseBdev2", 00:15:37.663 "uuid": "51a92e5b-3a8e-597b-a6db-c5eba11dd992", 00:15:37.663 "is_configured": true, 00:15:37.663 "data_offset": 2048, 00:15:37.663 "data_size": 63488 00:15:37.663 } 00:15:37.663 ] 00:15:37.663 }' 00:15:37.663 16:32:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:37.663 16:32:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.229 16:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:38.229 16:32:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:38.488 [2024-07-24 16:32:35.123393] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:39.425 
16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.425 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.686 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.686 "name": "raid_bdev1", 00:15:39.686 "uuid": "ea25c4ab-8076-4e01-b5a3-906d7347f4bb", 00:15:39.686 "strip_size_kb": 64, 00:15:39.686 "state": "online", 00:15:39.686 "raid_level": "concat", 00:15:39.686 "superblock": true, 00:15:39.686 "num_base_bdevs": 2, 00:15:39.686 "num_base_bdevs_discovered": 2, 00:15:39.686 "num_base_bdevs_operational": 2, 00:15:39.686 "base_bdevs_list": [ 00:15:39.686 { 00:15:39.686 "name": "BaseBdev1", 00:15:39.686 "uuid": "d85a7b45-1bd6-588c-90a2-afd658ffd3f2", 00:15:39.686 "is_configured": true, 00:15:39.686 "data_offset": 2048, 00:15:39.686 "data_size": 63488 00:15:39.686 
}, 00:15:39.686 { 00:15:39.686 "name": "BaseBdev2", 00:15:39.686 "uuid": "51a92e5b-3a8e-597b-a6db-c5eba11dd992", 00:15:39.686 "is_configured": true, 00:15:39.686 "data_offset": 2048, 00:15:39.686 "data_size": 63488 00:15:39.686 } 00:15:39.686 ] 00:15:39.686 }' 00:15:39.686 16:32:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.686 16:32:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.254 16:32:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:40.514 [2024-07-24 16:32:37.274559] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.514 [2024-07-24 16:32:37.274602] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:40.514 [2024-07-24 16:32:37.277902] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.514 [2024-07-24 16:32:37.277967] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.514 [2024-07-24 16:32:37.278007] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:40.514 [2024-07-24 16:32:37.278025] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:15:40.514 0 00:15:40.514 16:32:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1611525 00:15:40.514 16:32:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 1611525 ']' 00:15:40.514 16:32:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 1611525 00:15:40.514 16:32:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:15:40.514 16:32:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:40.514 
16:32:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1611525 00:15:40.514 16:32:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:40.514 16:32:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:40.514 16:32:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1611525' 00:15:40.514 killing process with pid 1611525 00:15:40.514 16:32:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 1611525 00:15:40.514 [2024-07-24 16:32:37.353389] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:40.514 16:32:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 1611525 00:15:40.774 [2024-07-24 16:32:37.453895] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.681 16:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.6VKriCDD5c 00:15:42.681 16:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:42.681 16:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:42.681 16:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:15:42.681 16:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:15:42.681 16:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:42.681 16:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:42.681 16:32:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:15:42.681 00:15:42.681 real 0m7.918s 00:15:42.681 user 0m10.990s 00:15:42.681 sys 0m1.232s 00:15:42.681 16:32:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:42.681 16:32:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.681 ************************************ 00:15:42.681 END TEST raid_read_error_test 00:15:42.681 ************************************ 00:15:42.681 16:32:39 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:15:42.681 16:32:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:42.681 16:32:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:42.681 16:32:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:42.681 ************************************ 00:15:42.681 START TEST raid_write_error_test 00:15:42.681 ************************************ 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= 
num_base_bdevs )) 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.2IQ89MloFP 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1612947 00:15:42.681 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1612947 /var/tmp/spdk-raid.sock 00:15:42.682 16:32:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:42.682 16:32:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 1612947 ']' 00:15:42.682 16:32:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:15:42.682 16:32:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.682 16:32:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:42.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:42.682 16:32:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.682 16:32:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.682 [2024-07-24 16:32:39.464116] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:15:42.682 [2024-07-24 16:32:39.464245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612947 ] 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:15:42.941 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: 
Requested device 0000:3f:01.4 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:42.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:42.941 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:42.941 [2024-07-24 16:32:39.689869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.201 [2024-07-24 16:32:39.971044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.460 [2024-07-24 16:32:40.311800] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.460 [2024-07-24 16:32:40.311841] 
bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.719 16:32:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:43.719 16:32:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:43.719 16:32:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:43.719 16:32:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:43.978 BaseBdev1_malloc 00:15:43.978 16:32:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:44.236 true 00:15:44.236 16:32:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:44.496 [2024-07-24 16:32:41.199366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:44.496 [2024-07-24 16:32:41.199425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.496 [2024-07-24 16:32:41.199456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:15:44.496 [2024-07-24 16:32:41.199480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.496 [2024-07-24 16:32:41.202264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.496 [2024-07-24 16:32:41.202306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:44.496 BaseBdev1 00:15:44.496 16:32:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:44.496 
16:32:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:44.755 BaseBdev2_malloc 00:15:44.755 16:32:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:45.014 true 00:15:45.014 16:32:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:45.273 [2024-07-24 16:32:41.925432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:45.273 [2024-07-24 16:32:41.925490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.273 [2024-07-24 16:32:41.925516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:15:45.273 [2024-07-24 16:32:41.925538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.273 [2024-07-24 16:32:41.928232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.273 [2024-07-24 16:32:41.928268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:45.273 BaseBdev2 00:15:45.273 16:32:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:45.551 [2024-07-24 16:32:42.150119] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.551 [2024-07-24 16:32:42.152470] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.551 [2024-07-24 
16:32:42.152724] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:15:45.551 [2024-07-24 16:32:42.152747] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:45.551 [2024-07-24 16:32:42.153086] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:45.551 [2024-07-24 16:32:42.153363] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:15:45.551 [2024-07-24 16:32:42.153379] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:15:45.551 [2024-07-24 16:32:42.153584] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.551 "name": "raid_bdev1", 00:15:45.551 "uuid": "2a2885ad-48c2-4d44-ba24-97e2cd90c7c9", 00:15:45.551 "strip_size_kb": 64, 00:15:45.551 "state": "online", 00:15:45.551 "raid_level": "concat", 00:15:45.551 "superblock": true, 00:15:45.551 "num_base_bdevs": 2, 00:15:45.551 "num_base_bdevs_discovered": 2, 00:15:45.551 "num_base_bdevs_operational": 2, 00:15:45.551 "base_bdevs_list": [ 00:15:45.551 { 00:15:45.551 "name": "BaseBdev1", 00:15:45.551 "uuid": "d6e72f45-7a27-5f1c-8151-7d93169bc411", 00:15:45.551 "is_configured": true, 00:15:45.551 "data_offset": 2048, 00:15:45.551 "data_size": 63488 00:15:45.551 }, 00:15:45.551 { 00:15:45.551 "name": "BaseBdev2", 00:15:45.551 "uuid": "d017c632-e240-504e-9b89-1f19379c4696", 00:15:45.551 "is_configured": true, 00:15:45.551 "data_offset": 2048, 00:15:45.551 "data_size": 63488 00:15:45.551 } 00:15:45.551 ] 00:15:45.551 }' 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.551 16:32:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.160 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:46.160 16:32:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:46.420 [2024-07-24 16:32:43.066581] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:15:47.358 16:32:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.358 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.617 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:47.617 "name": "raid_bdev1", 
00:15:47.617 "uuid": "2a2885ad-48c2-4d44-ba24-97e2cd90c7c9", 00:15:47.617 "strip_size_kb": 64, 00:15:47.617 "state": "online", 00:15:47.617 "raid_level": "concat", 00:15:47.617 "superblock": true, 00:15:47.617 "num_base_bdevs": 2, 00:15:47.617 "num_base_bdevs_discovered": 2, 00:15:47.617 "num_base_bdevs_operational": 2, 00:15:47.617 "base_bdevs_list": [ 00:15:47.617 { 00:15:47.617 "name": "BaseBdev1", 00:15:47.617 "uuid": "d6e72f45-7a27-5f1c-8151-7d93169bc411", 00:15:47.617 "is_configured": true, 00:15:47.617 "data_offset": 2048, 00:15:47.617 "data_size": 63488 00:15:47.617 }, 00:15:47.617 { 00:15:47.617 "name": "BaseBdev2", 00:15:47.617 "uuid": "d017c632-e240-504e-9b89-1f19379c4696", 00:15:47.617 "is_configured": true, 00:15:47.617 "data_offset": 2048, 00:15:47.617 "data_size": 63488 00:15:47.617 } 00:15:47.617 ] 00:15:47.617 }' 00:15:47.617 16:32:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:47.617 16:32:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.186 16:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:48.445 [2024-07-24 16:32:45.218111] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.445 [2024-07-24 16:32:45.218163] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.445 [2024-07-24 16:32:45.221456] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.445 [2024-07-24 16:32:45.221509] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.445 [2024-07-24 16:32:45.221548] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.445 [2024-07-24 16:32:45.221569] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name 
raid_bdev1, state offline 00:15:48.445 0 00:15:48.446 16:32:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1612947 00:15:48.446 16:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 1612947 ']' 00:15:48.446 16:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 1612947 00:15:48.446 16:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:15:48.446 16:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:48.446 16:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1612947 00:15:48.446 16:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:48.446 16:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:48.446 16:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1612947' 00:15:48.446 killing process with pid 1612947 00:15:48.446 16:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 1612947 00:15:48.446 [2024-07-24 16:32:45.295511] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.446 16:32:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 1612947 00:15:48.704 [2024-07-24 16:32:45.399596] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:50.611 16:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.2IQ89MloFP 00:15:50.611 16:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:50.611 16:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:50.611 16:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:15:50.611 16:32:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:15:50.611 16:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:50.611 16:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:50.611 16:32:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:15:50.611 00:15:50.611 real 0m7.866s 00:15:50.611 user 0m10.961s 00:15:50.611 sys 0m1.172s 00:15:50.611 16:32:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:50.611 16:32:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.611 ************************************ 00:15:50.611 END TEST raid_write_error_test 00:15:50.611 ************************************ 00:15:50.611 16:32:47 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:15:50.611 16:32:47 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:50.611 16:32:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:50.611 16:32:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:50.611 16:32:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:50.611 ************************************ 00:15:50.611 START TEST raid_state_function_test 00:15:50.611 ************************************ 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- 
# local raid_bdev 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:50.611 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:50.612 16:32:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1614361 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1614361' 00:15:50.612 Process raid pid: 1614361 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1614361 /var/tmp/spdk-raid.sock 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 1614361 ']' 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:50.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:50.612 16:32:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.612 [2024-07-24 16:32:47.406854] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:15:50.612 [2024-07-24 16:32:47.406970] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.871 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:50.872 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:50.872 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:50.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:50.872 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:50.872 [2024-07-24 16:32:47.636898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.131 [2024-07-24 16:32:47.922169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.700 [2024-07-24 16:32:48.272268] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.700 [2024-07-24 16:32:48.272304] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.700 16:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.700 16:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:51.700 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:51.960 [2024-07-24 16:32:48.673406] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:51.960 [2024-07-24 16:32:48.673460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:15:51.960 [2024-07-24 16:32:48.673475] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.960 [2024-07-24 16:32:48.673492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.960 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:51.960 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:51.960 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:51.960 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:51.960 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:51.960 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:51.960 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.960 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.960 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.960 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.960 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.960 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.231 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:52.231 "name": "Existed_Raid", 00:15:52.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.231 "strip_size_kb": 0, 
00:15:52.231 "state": "configuring", 00:15:52.231 "raid_level": "raid1", 00:15:52.231 "superblock": false, 00:15:52.231 "num_base_bdevs": 2, 00:15:52.231 "num_base_bdevs_discovered": 0, 00:15:52.231 "num_base_bdevs_operational": 2, 00:15:52.231 "base_bdevs_list": [ 00:15:52.231 { 00:15:52.231 "name": "BaseBdev1", 00:15:52.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.231 "is_configured": false, 00:15:52.231 "data_offset": 0, 00:15:52.231 "data_size": 0 00:15:52.231 }, 00:15:52.231 { 00:15:52.231 "name": "BaseBdev2", 00:15:52.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.231 "is_configured": false, 00:15:52.231 "data_offset": 0, 00:15:52.231 "data_size": 0 00:15:52.231 } 00:15:52.231 ] 00:15:52.231 }' 00:15:52.231 16:32:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:52.231 16:32:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.804 16:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:53.062 [2024-07-24 16:32:49.704068] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:53.062 [2024-07-24 16:32:49.704107] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:15:53.062 16:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:53.321 [2024-07-24 16:32:49.928702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:53.321 [2024-07-24 16:32:49.928749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:53.321 [2024-07-24 16:32:49.928763] bdev.c:8190:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:53.321 [2024-07-24 16:32:49.928779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:53.321 16:32:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:53.580 [2024-07-24 16:32:50.196564] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:53.580 BaseBdev1 00:15:53.580 16:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:53.580 16:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:53.580 16:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:53.580 16:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:53.580 16:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:53.580 16:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:53.580 16:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:53.839 16:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:53.839 [ 00:15:53.839 { 00:15:53.839 "name": "BaseBdev1", 00:15:53.839 "aliases": [ 00:15:53.839 "4746b2cd-810e-41aa-b76a-7e18173b08b2" 00:15:53.839 ], 00:15:53.839 "product_name": "Malloc disk", 00:15:53.839 "block_size": 512, 00:15:53.839 "num_blocks": 65536, 00:15:53.839 "uuid": "4746b2cd-810e-41aa-b76a-7e18173b08b2", 00:15:53.839 
"assigned_rate_limits": { 00:15:53.839 "rw_ios_per_sec": 0, 00:15:53.839 "rw_mbytes_per_sec": 0, 00:15:53.839 "r_mbytes_per_sec": 0, 00:15:53.839 "w_mbytes_per_sec": 0 00:15:53.839 }, 00:15:53.839 "claimed": true, 00:15:53.839 "claim_type": "exclusive_write", 00:15:53.839 "zoned": false, 00:15:53.839 "supported_io_types": { 00:15:53.839 "read": true, 00:15:53.839 "write": true, 00:15:53.839 "unmap": true, 00:15:53.839 "flush": true, 00:15:53.839 "reset": true, 00:15:53.839 "nvme_admin": false, 00:15:53.839 "nvme_io": false, 00:15:53.839 "nvme_io_md": false, 00:15:53.839 "write_zeroes": true, 00:15:53.839 "zcopy": true, 00:15:53.839 "get_zone_info": false, 00:15:53.839 "zone_management": false, 00:15:53.839 "zone_append": false, 00:15:53.839 "compare": false, 00:15:53.839 "compare_and_write": false, 00:15:53.839 "abort": true, 00:15:53.839 "seek_hole": false, 00:15:53.839 "seek_data": false, 00:15:53.839 "copy": true, 00:15:53.839 "nvme_iov_md": false 00:15:53.839 }, 00:15:53.839 "memory_domains": [ 00:15:53.839 { 00:15:53.839 "dma_device_id": "system", 00:15:53.839 "dma_device_type": 1 00:15:53.839 }, 00:15:53.839 { 00:15:53.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.839 "dma_device_type": 2 00:15:53.839 } 00:15:53.839 ], 00:15:53.839 "driver_specific": {} 00:15:53.839 } 00:15:53.839 ] 00:15:53.839 16:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:53.839 16:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:53.839 16:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:53.839 16:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:53.839 16:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:53.839 16:32:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:53.839 16:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:53.839 16:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:53.839 16:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:53.839 16:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:53.839 16:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:53.839 16:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.839 16:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.098 16:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:54.098 "name": "Existed_Raid", 00:15:54.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.098 "strip_size_kb": 0, 00:15:54.098 "state": "configuring", 00:15:54.098 "raid_level": "raid1", 00:15:54.098 "superblock": false, 00:15:54.098 "num_base_bdevs": 2, 00:15:54.098 "num_base_bdevs_discovered": 1, 00:15:54.098 "num_base_bdevs_operational": 2, 00:15:54.098 "base_bdevs_list": [ 00:15:54.098 { 00:15:54.098 "name": "BaseBdev1", 00:15:54.098 "uuid": "4746b2cd-810e-41aa-b76a-7e18173b08b2", 00:15:54.098 "is_configured": true, 00:15:54.098 "data_offset": 0, 00:15:54.098 "data_size": 65536 00:15:54.098 }, 00:15:54.098 { 00:15:54.098 "name": "BaseBdev2", 00:15:54.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.099 "is_configured": false, 00:15:54.099 "data_offset": 0, 00:15:54.099 "data_size": 0 00:15:54.099 } 00:15:54.099 ] 00:15:54.099 }' 00:15:54.099 16:32:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:15:54.099 16:32:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.666 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:54.924 [2024-07-24 16:32:51.688613] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:54.924 [2024-07-24 16:32:51.688667] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:15:54.924 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:55.182 [2024-07-24 16:32:51.917298] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.182 [2024-07-24 16:32:51.919594] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.182 [2024-07-24 16:32:51.919637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.182 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:55.182 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:55.182 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:55.182 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:55.182 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:55.182 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:55.182 16:32:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:55.182 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:55.182 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:55.183 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:55.183 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:55.183 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:55.183 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.183 16:32:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.441 16:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.441 "name": "Existed_Raid", 00:15:55.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.441 "strip_size_kb": 0, 00:15:55.441 "state": "configuring", 00:15:55.441 "raid_level": "raid1", 00:15:55.441 "superblock": false, 00:15:55.441 "num_base_bdevs": 2, 00:15:55.441 "num_base_bdevs_discovered": 1, 00:15:55.441 "num_base_bdevs_operational": 2, 00:15:55.441 "base_bdevs_list": [ 00:15:55.441 { 00:15:55.441 "name": "BaseBdev1", 00:15:55.441 "uuid": "4746b2cd-810e-41aa-b76a-7e18173b08b2", 00:15:55.441 "is_configured": true, 00:15:55.441 "data_offset": 0, 00:15:55.441 "data_size": 65536 00:15:55.441 }, 00:15:55.441 { 00:15:55.441 "name": "BaseBdev2", 00:15:55.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.441 "is_configured": false, 00:15:55.441 "data_offset": 0, 00:15:55.441 "data_size": 0 00:15:55.441 } 00:15:55.441 ] 00:15:55.441 }' 00:15:55.441 16:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:15:55.441 16:32:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.009 16:32:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:56.268 [2024-07-24 16:32:52.993466] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.268 [2024-07-24 16:32:52.993517] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:15:56.268 [2024-07-24 16:32:52.993536] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:56.268 [2024-07-24 16:32:52.993867] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:56.268 [2024-07-24 16:32:52.994114] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:15:56.268 [2024-07-24 16:32:52.994132] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:15:56.268 [2024-07-24 16:32:52.994453] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.268 BaseBdev2 00:15:56.268 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:56.268 16:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:56.268 16:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:56.268 16:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:56.268 16:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:56.268 16:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:56.268 16:32:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:56.527 16:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.784 [ 00:15:56.784 { 00:15:56.784 "name": "BaseBdev2", 00:15:56.784 "aliases": [ 00:15:56.785 "917d25f6-b157-4853-a693-743a3d6b4e9a" 00:15:56.785 ], 00:15:56.785 "product_name": "Malloc disk", 00:15:56.785 "block_size": 512, 00:15:56.785 "num_blocks": 65536, 00:15:56.785 "uuid": "917d25f6-b157-4853-a693-743a3d6b4e9a", 00:15:56.785 "assigned_rate_limits": { 00:15:56.785 "rw_ios_per_sec": 0, 00:15:56.785 "rw_mbytes_per_sec": 0, 00:15:56.785 "r_mbytes_per_sec": 0, 00:15:56.785 "w_mbytes_per_sec": 0 00:15:56.785 }, 00:15:56.785 "claimed": true, 00:15:56.785 "claim_type": "exclusive_write", 00:15:56.785 "zoned": false, 00:15:56.785 "supported_io_types": { 00:15:56.785 "read": true, 00:15:56.785 "write": true, 00:15:56.785 "unmap": true, 00:15:56.785 "flush": true, 00:15:56.785 "reset": true, 00:15:56.785 "nvme_admin": false, 00:15:56.785 "nvme_io": false, 00:15:56.785 "nvme_io_md": false, 00:15:56.785 "write_zeroes": true, 00:15:56.785 "zcopy": true, 00:15:56.785 "get_zone_info": false, 00:15:56.785 "zone_management": false, 00:15:56.785 "zone_append": false, 00:15:56.785 "compare": false, 00:15:56.785 "compare_and_write": false, 00:15:56.785 "abort": true, 00:15:56.785 "seek_hole": false, 00:15:56.785 "seek_data": false, 00:15:56.785 "copy": true, 00:15:56.785 "nvme_iov_md": false 00:15:56.785 }, 00:15:56.785 "memory_domains": [ 00:15:56.785 { 00:15:56.785 "dma_device_id": "system", 00:15:56.785 "dma_device_type": 1 00:15:56.785 }, 00:15:56.785 { 00:15:56.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.785 "dma_device_type": 2 00:15:56.785 } 00:15:56.785 ], 00:15:56.785 "driver_specific": {} 
00:15:56.785 } 00:15:56.785 ] 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.785 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.044 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:57.044 "name": "Existed_Raid", 00:15:57.044 "uuid": 
"9bb8eb42-c6f9-41d5-b567-7b7c89e1d7b4", 00:15:57.044 "strip_size_kb": 0, 00:15:57.044 "state": "online", 00:15:57.044 "raid_level": "raid1", 00:15:57.044 "superblock": false, 00:15:57.044 "num_base_bdevs": 2, 00:15:57.044 "num_base_bdevs_discovered": 2, 00:15:57.044 "num_base_bdevs_operational": 2, 00:15:57.044 "base_bdevs_list": [ 00:15:57.044 { 00:15:57.044 "name": "BaseBdev1", 00:15:57.044 "uuid": "4746b2cd-810e-41aa-b76a-7e18173b08b2", 00:15:57.044 "is_configured": true, 00:15:57.044 "data_offset": 0, 00:15:57.044 "data_size": 65536 00:15:57.044 }, 00:15:57.044 { 00:15:57.044 "name": "BaseBdev2", 00:15:57.044 "uuid": "917d25f6-b157-4853-a693-743a3d6b4e9a", 00:15:57.044 "is_configured": true, 00:15:57.044 "data_offset": 0, 00:15:57.044 "data_size": 65536 00:15:57.044 } 00:15:57.044 ] 00:15:57.044 }' 00:15:57.044 16:32:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:57.044 16:32:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.611 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:57.611 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:57.611 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:57.611 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:57.611 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:57.611 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:57.611 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:57.611 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:15:57.611 [2024-07-24 16:32:54.413936] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.611 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:57.611 "name": "Existed_Raid", 00:15:57.611 "aliases": [ 00:15:57.611 "9bb8eb42-c6f9-41d5-b567-7b7c89e1d7b4" 00:15:57.611 ], 00:15:57.611 "product_name": "Raid Volume", 00:15:57.611 "block_size": 512, 00:15:57.611 "num_blocks": 65536, 00:15:57.611 "uuid": "9bb8eb42-c6f9-41d5-b567-7b7c89e1d7b4", 00:15:57.611 "assigned_rate_limits": { 00:15:57.611 "rw_ios_per_sec": 0, 00:15:57.611 "rw_mbytes_per_sec": 0, 00:15:57.611 "r_mbytes_per_sec": 0, 00:15:57.611 "w_mbytes_per_sec": 0 00:15:57.611 }, 00:15:57.611 "claimed": false, 00:15:57.611 "zoned": false, 00:15:57.611 "supported_io_types": { 00:15:57.611 "read": true, 00:15:57.611 "write": true, 00:15:57.611 "unmap": false, 00:15:57.611 "flush": false, 00:15:57.611 "reset": true, 00:15:57.611 "nvme_admin": false, 00:15:57.611 "nvme_io": false, 00:15:57.611 "nvme_io_md": false, 00:15:57.611 "write_zeroes": true, 00:15:57.612 "zcopy": false, 00:15:57.612 "get_zone_info": false, 00:15:57.612 "zone_management": false, 00:15:57.612 "zone_append": false, 00:15:57.612 "compare": false, 00:15:57.612 "compare_and_write": false, 00:15:57.612 "abort": false, 00:15:57.612 "seek_hole": false, 00:15:57.612 "seek_data": false, 00:15:57.612 "copy": false, 00:15:57.612 "nvme_iov_md": false 00:15:57.612 }, 00:15:57.612 "memory_domains": [ 00:15:57.612 { 00:15:57.612 "dma_device_id": "system", 00:15:57.612 "dma_device_type": 1 00:15:57.612 }, 00:15:57.612 { 00:15:57.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.612 "dma_device_type": 2 00:15:57.612 }, 00:15:57.612 { 00:15:57.612 "dma_device_id": "system", 00:15:57.612 "dma_device_type": 1 00:15:57.612 }, 00:15:57.612 { 00:15:57.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.612 "dma_device_type": 2 00:15:57.612 } 00:15:57.612 ], 00:15:57.612 
"driver_specific": { 00:15:57.612 "raid": { 00:15:57.612 "uuid": "9bb8eb42-c6f9-41d5-b567-7b7c89e1d7b4", 00:15:57.612 "strip_size_kb": 0, 00:15:57.612 "state": "online", 00:15:57.612 "raid_level": "raid1", 00:15:57.612 "superblock": false, 00:15:57.612 "num_base_bdevs": 2, 00:15:57.612 "num_base_bdevs_discovered": 2, 00:15:57.612 "num_base_bdevs_operational": 2, 00:15:57.612 "base_bdevs_list": [ 00:15:57.612 { 00:15:57.612 "name": "BaseBdev1", 00:15:57.612 "uuid": "4746b2cd-810e-41aa-b76a-7e18173b08b2", 00:15:57.612 "is_configured": true, 00:15:57.612 "data_offset": 0, 00:15:57.612 "data_size": 65536 00:15:57.612 }, 00:15:57.612 { 00:15:57.612 "name": "BaseBdev2", 00:15:57.612 "uuid": "917d25f6-b157-4853-a693-743a3d6b4e9a", 00:15:57.612 "is_configured": true, 00:15:57.612 "data_offset": 0, 00:15:57.612 "data_size": 65536 00:15:57.612 } 00:15:57.612 ] 00:15:57.612 } 00:15:57.612 } 00:15:57.612 }' 00:15:57.612 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:57.870 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:57.870 BaseBdev2' 00:15:57.870 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:57.870 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.870 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:57.870 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.870 "name": "BaseBdev1", 00:15:57.870 "aliases": [ 00:15:57.870 "4746b2cd-810e-41aa-b76a-7e18173b08b2" 00:15:57.870 ], 00:15:57.870 "product_name": "Malloc disk", 00:15:57.870 "block_size": 512, 00:15:57.870 "num_blocks": 65536, 00:15:57.870 
"uuid": "4746b2cd-810e-41aa-b76a-7e18173b08b2", 00:15:57.870 "assigned_rate_limits": { 00:15:57.870 "rw_ios_per_sec": 0, 00:15:57.870 "rw_mbytes_per_sec": 0, 00:15:57.870 "r_mbytes_per_sec": 0, 00:15:57.870 "w_mbytes_per_sec": 0 00:15:57.870 }, 00:15:57.870 "claimed": true, 00:15:57.870 "claim_type": "exclusive_write", 00:15:57.870 "zoned": false, 00:15:57.870 "supported_io_types": { 00:15:57.870 "read": true, 00:15:57.870 "write": true, 00:15:57.870 "unmap": true, 00:15:57.870 "flush": true, 00:15:57.870 "reset": true, 00:15:57.870 "nvme_admin": false, 00:15:57.870 "nvme_io": false, 00:15:57.870 "nvme_io_md": false, 00:15:57.870 "write_zeroes": true, 00:15:57.870 "zcopy": true, 00:15:57.870 "get_zone_info": false, 00:15:57.870 "zone_management": false, 00:15:57.870 "zone_append": false, 00:15:57.870 "compare": false, 00:15:57.870 "compare_and_write": false, 00:15:57.870 "abort": true, 00:15:57.870 "seek_hole": false, 00:15:57.870 "seek_data": false, 00:15:57.870 "copy": true, 00:15:57.870 "nvme_iov_md": false 00:15:57.870 }, 00:15:57.870 "memory_domains": [ 00:15:57.870 { 00:15:57.870 "dma_device_id": "system", 00:15:57.870 "dma_device_type": 1 00:15:57.870 }, 00:15:57.870 { 00:15:57.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.870 "dma_device_type": 2 00:15:57.870 } 00:15:57.870 ], 00:15:57.870 "driver_specific": {} 00:15:57.870 }' 00:15:57.870 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.129 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.129 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:58.129 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.129 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.129 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:58.129 16:32:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.129 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.129 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:58.129 16:32:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.388 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.388 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:58.388 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:58.388 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:58.388 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:58.646 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:58.646 "name": "BaseBdev2", 00:15:58.646 "aliases": [ 00:15:58.646 "917d25f6-b157-4853-a693-743a3d6b4e9a" 00:15:58.646 ], 00:15:58.646 "product_name": "Malloc disk", 00:15:58.646 "block_size": 512, 00:15:58.646 "num_blocks": 65536, 00:15:58.646 "uuid": "917d25f6-b157-4853-a693-743a3d6b4e9a", 00:15:58.646 "assigned_rate_limits": { 00:15:58.646 "rw_ios_per_sec": 0, 00:15:58.646 "rw_mbytes_per_sec": 0, 00:15:58.646 "r_mbytes_per_sec": 0, 00:15:58.646 "w_mbytes_per_sec": 0 00:15:58.646 }, 00:15:58.646 "claimed": true, 00:15:58.646 "claim_type": "exclusive_write", 00:15:58.646 "zoned": false, 00:15:58.646 "supported_io_types": { 00:15:58.646 "read": true, 00:15:58.646 "write": true, 00:15:58.646 "unmap": true, 00:15:58.646 "flush": true, 00:15:58.646 "reset": true, 00:15:58.646 "nvme_admin": false, 00:15:58.646 "nvme_io": false, 00:15:58.646 "nvme_io_md": false, 
00:15:58.646 "write_zeroes": true, 00:15:58.647 "zcopy": true, 00:15:58.647 "get_zone_info": false, 00:15:58.647 "zone_management": false, 00:15:58.647 "zone_append": false, 00:15:58.647 "compare": false, 00:15:58.647 "compare_and_write": false, 00:15:58.647 "abort": true, 00:15:58.647 "seek_hole": false, 00:15:58.647 "seek_data": false, 00:15:58.647 "copy": true, 00:15:58.647 "nvme_iov_md": false 00:15:58.647 }, 00:15:58.647 "memory_domains": [ 00:15:58.647 { 00:15:58.647 "dma_device_id": "system", 00:15:58.647 "dma_device_type": 1 00:15:58.647 }, 00:15:58.647 { 00:15:58.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.647 "dma_device_type": 2 00:15:58.647 } 00:15:58.647 ], 00:15:58.647 "driver_specific": {} 00:15:58.647 }' 00:15:58.647 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.647 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.647 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:58.647 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.647 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.647 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:58.647 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.647 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.905 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:58.906 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.906 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.906 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:58.906 16:32:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:59.164 [2024-07-24 16:32:55.829473] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.165 16:32:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.165 16:32:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.462 16:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:59.462 "name": "Existed_Raid", 00:15:59.462 "uuid": "9bb8eb42-c6f9-41d5-b567-7b7c89e1d7b4", 00:15:59.462 "strip_size_kb": 0, 00:15:59.462 "state": "online", 00:15:59.462 "raid_level": "raid1", 00:15:59.462 "superblock": false, 00:15:59.462 "num_base_bdevs": 2, 00:15:59.462 "num_base_bdevs_discovered": 1, 00:15:59.462 "num_base_bdevs_operational": 1, 00:15:59.462 "base_bdevs_list": [ 00:15:59.462 { 00:15:59.462 "name": null, 00:15:59.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.462 "is_configured": false, 00:15:59.462 "data_offset": 0, 00:15:59.462 "data_size": 65536 00:15:59.462 }, 00:15:59.462 { 00:15:59.462 "name": "BaseBdev2", 00:15:59.462 "uuid": "917d25f6-b157-4853-a693-743a3d6b4e9a", 00:15:59.462 "is_configured": true, 00:15:59.462 "data_offset": 0, 00:15:59.462 "data_size": 65536 00:15:59.462 } 00:15:59.462 ] 00:15:59.462 }' 00:15:59.462 16:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:59.462 16:32:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.041 16:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:00.041 16:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:00.041 16:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:00.041 16:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:00.300 16:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:00.300 16:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.300 16:32:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:00.300 [2024-07-24 16:32:57.118407] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:00.300 [2024-07-24 16:32:57.118514] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.620 [2024-07-24 16:32:57.246503] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.620 [2024-07-24 16:32:57.246556] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.620 [2024-07-24 16:32:57.246574] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:16:00.620 16:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:00.620 16:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:00.621 16:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.621 16:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:00.879 16:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:00.879 16:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:00.880 16:32:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:00.880 16:32:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1614361 00:16:00.880 16:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 1614361 ']' 00:16:00.880 16:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 1614361 00:16:00.880 16:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:00.880 16:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:00.880 16:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1614361 00:16:00.880 16:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:00.880 16:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:00.880 16:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1614361' 00:16:00.880 killing process with pid 1614361 00:16:00.880 16:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 1614361 00:16:00.880 [2024-07-24 16:32:57.538946] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:00.880 16:32:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 1614361 00:16:00.880 [2024-07-24 16:32:57.561273] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:02.784 00:16:02.784 real 0m11.929s 00:16:02.784 user 0m19.421s 00:16:02.784 sys 0m2.081s 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.784 ************************************ 00:16:02.784 END TEST 
raid_state_function_test 00:16:02.784 ************************************ 00:16:02.784 16:32:59 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:16:02.784 16:32:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:02.784 16:32:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.784 16:32:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:02.784 ************************************ 00:16:02.784 START TEST raid_state_function_test_sb 00:16:02.784 ************************************ 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1616693 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1616693' 00:16:02.784 Process raid pid: 1616693 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1616693 /var/tmp/spdk-raid.sock 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 1616693 ']' 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:02.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:02.784 16:32:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.784 [2024-07-24 16:32:59.510872] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:16:02.784 [2024-07-24 16:32:59.511117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:03.044 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:16:03.044 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:03.044 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:03.044 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:03.044 [2024-07-24 16:32:59.878681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.611 [2024-07-24 16:33:00.173481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.870 [2024-07-24 16:33:00.511691] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.870 [2024-07-24 16:33:00.511727] 
bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.870 16:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:03.870 16:33:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:03.870 16:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:04.129 [2024-07-24 16:33:00.891789] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.129 [2024-07-24 16:33:00.891844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.129 [2024-07-24 16:33:00.891859] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.129 [2024-07-24 16:33:00.891876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.129 16:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:04.129 16:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:04.129 16:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:04.129 16:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:04.129 16:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:04.129 16:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:04.129 16:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.129 16:33:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.129 16:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.129 16:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.129 16:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.129 16:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.387 16:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.387 "name": "Existed_Raid", 00:16:04.387 "uuid": "a73bea83-53e1-493e-b08a-99ed54852b03", 00:16:04.387 "strip_size_kb": 0, 00:16:04.387 "state": "configuring", 00:16:04.387 "raid_level": "raid1", 00:16:04.387 "superblock": true, 00:16:04.387 "num_base_bdevs": 2, 00:16:04.387 "num_base_bdevs_discovered": 0, 00:16:04.387 "num_base_bdevs_operational": 2, 00:16:04.387 "base_bdevs_list": [ 00:16:04.387 { 00:16:04.387 "name": "BaseBdev1", 00:16:04.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.387 "is_configured": false, 00:16:04.387 "data_offset": 0, 00:16:04.387 "data_size": 0 00:16:04.387 }, 00:16:04.387 { 00:16:04.387 "name": "BaseBdev2", 00:16:04.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.388 "is_configured": false, 00:16:04.388 "data_offset": 0, 00:16:04.388 "data_size": 0 00:16:04.388 } 00:16:04.388 ] 00:16:04.388 }' 00:16:04.388 16:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.388 16:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.955 16:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:16:05.214 [2024-07-24 16:33:01.850224] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.214 [2024-07-24 16:33:01.850264] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:16:05.214 16:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:05.473 [2024-07-24 16:33:02.078873] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:05.473 [2024-07-24 16:33:02.078918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:05.473 [2024-07-24 16:33:02.078932] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:05.473 [2024-07-24 16:33:02.078948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:05.473 16:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:05.732 [2024-07-24 16:33:02.350645] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:05.732 BaseBdev1 00:16:05.732 16:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:05.732 16:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:05.732 16:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:05.732 16:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:05.732 16:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:16:05.732 16:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:05.732 16:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.991 16:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:05.991 [ 00:16:05.991 { 00:16:05.991 "name": "BaseBdev1", 00:16:05.991 "aliases": [ 00:16:05.991 "cd208d23-1677-4b16-8935-a64c6119458c" 00:16:05.991 ], 00:16:05.991 "product_name": "Malloc disk", 00:16:05.991 "block_size": 512, 00:16:05.991 "num_blocks": 65536, 00:16:05.991 "uuid": "cd208d23-1677-4b16-8935-a64c6119458c", 00:16:05.991 "assigned_rate_limits": { 00:16:05.991 "rw_ios_per_sec": 0, 00:16:05.991 "rw_mbytes_per_sec": 0, 00:16:05.991 "r_mbytes_per_sec": 0, 00:16:05.991 "w_mbytes_per_sec": 0 00:16:05.991 }, 00:16:05.991 "claimed": true, 00:16:05.991 "claim_type": "exclusive_write", 00:16:05.991 "zoned": false, 00:16:05.991 "supported_io_types": { 00:16:05.991 "read": true, 00:16:05.991 "write": true, 00:16:05.991 "unmap": true, 00:16:05.991 "flush": true, 00:16:05.991 "reset": true, 00:16:05.991 "nvme_admin": false, 00:16:05.991 "nvme_io": false, 00:16:05.991 "nvme_io_md": false, 00:16:05.991 "write_zeroes": true, 00:16:05.991 "zcopy": true, 00:16:05.991 "get_zone_info": false, 00:16:05.991 "zone_management": false, 00:16:05.991 "zone_append": false, 00:16:05.991 "compare": false, 00:16:05.991 "compare_and_write": false, 00:16:05.991 "abort": true, 00:16:05.991 "seek_hole": false, 00:16:05.991 "seek_data": false, 00:16:05.991 "copy": true, 00:16:05.991 "nvme_iov_md": false 00:16:05.991 }, 00:16:05.991 "memory_domains": [ 00:16:05.991 { 00:16:05.991 "dma_device_id": "system", 00:16:05.991 "dma_device_type": 1 
00:16:05.991 }, 00:16:05.991 { 00:16:05.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.991 "dma_device_type": 2 00:16:05.991 } 00:16:05.991 ], 00:16:05.991 "driver_specific": {} 00:16:05.991 } 00:16:05.991 ] 00:16:05.991 16:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:05.991 16:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:05.991 16:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:05.991 16:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:05.991 16:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:05.991 16:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:05.991 16:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:05.991 16:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.991 16:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:05.991 16:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:05.991 16:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.991 16:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.991 16:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.250 16:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.250 "name": "Existed_Raid", 
00:16:06.250 "uuid": "da363116-f113-4a54-a8a3-75ac73c783f5", 00:16:06.250 "strip_size_kb": 0, 00:16:06.250 "state": "configuring", 00:16:06.250 "raid_level": "raid1", 00:16:06.250 "superblock": true, 00:16:06.250 "num_base_bdevs": 2, 00:16:06.250 "num_base_bdevs_discovered": 1, 00:16:06.250 "num_base_bdevs_operational": 2, 00:16:06.250 "base_bdevs_list": [ 00:16:06.250 { 00:16:06.250 "name": "BaseBdev1", 00:16:06.250 "uuid": "cd208d23-1677-4b16-8935-a64c6119458c", 00:16:06.250 "is_configured": true, 00:16:06.250 "data_offset": 2048, 00:16:06.250 "data_size": 63488 00:16:06.250 }, 00:16:06.250 { 00:16:06.250 "name": "BaseBdev2", 00:16:06.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.250 "is_configured": false, 00:16:06.250 "data_offset": 0, 00:16:06.250 "data_size": 0 00:16:06.250 } 00:16:06.250 ] 00:16:06.250 }' 00:16:06.250 16:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.250 16:33:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.818 16:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:07.077 [2024-07-24 16:33:03.870826] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:07.077 [2024-07-24 16:33:03.870881] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:16:07.077 16:33:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:07.340 [2024-07-24 16:33:04.099530] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:07.340 [2024-07-24 16:33:04.101836] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:16:07.340 [2024-07-24 16:33:04.101881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:07.340 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:07.340 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:07.340 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:07.340 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:07.340 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:07.340 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:07.340 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:07.340 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:07.340 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.340 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.340 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.340 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.340 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.340 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.603 16:33:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.603 "name": "Existed_Raid", 00:16:07.603 "uuid": "ae81ec3f-6d3c-4cd6-98fc-7e49e1796ce0", 00:16:07.603 "strip_size_kb": 0, 00:16:07.603 "state": "configuring", 00:16:07.603 "raid_level": "raid1", 00:16:07.603 "superblock": true, 00:16:07.603 "num_base_bdevs": 2, 00:16:07.603 "num_base_bdevs_discovered": 1, 00:16:07.603 "num_base_bdevs_operational": 2, 00:16:07.603 "base_bdevs_list": [ 00:16:07.604 { 00:16:07.604 "name": "BaseBdev1", 00:16:07.604 "uuid": "cd208d23-1677-4b16-8935-a64c6119458c", 00:16:07.604 "is_configured": true, 00:16:07.604 "data_offset": 2048, 00:16:07.604 "data_size": 63488 00:16:07.604 }, 00:16:07.604 { 00:16:07.604 "name": "BaseBdev2", 00:16:07.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.604 "is_configured": false, 00:16:07.604 "data_offset": 0, 00:16:07.604 "data_size": 0 00:16:07.604 } 00:16:07.604 ] 00:16:07.604 }' 00:16:07.604 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.604 16:33:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.171 16:33:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:08.431 [2024-07-24 16:33:05.137025] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.431 [2024-07-24 16:33:05.137338] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:16:08.431 [2024-07-24 16:33:05.137363] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:08.431 [2024-07-24 16:33:05.137691] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:16:08.431 [2024-07-24 16:33:05.137909] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:16:08.431 
[2024-07-24 16:33:05.137928] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:16:08.431 BaseBdev2 00:16:08.431 [2024-07-24 16:33:05.138108] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.431 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:08.431 16:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:08.431 16:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:08.431 16:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:08.431 16:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:08.431 16:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:08.431 16:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.690 16:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:08.949 [ 00:16:08.949 { 00:16:08.949 "name": "BaseBdev2", 00:16:08.949 "aliases": [ 00:16:08.949 "b0c6f3c3-e812-437a-aa11-5ee042af0dcd" 00:16:08.949 ], 00:16:08.949 "product_name": "Malloc disk", 00:16:08.949 "block_size": 512, 00:16:08.949 "num_blocks": 65536, 00:16:08.949 "uuid": "b0c6f3c3-e812-437a-aa11-5ee042af0dcd", 00:16:08.949 "assigned_rate_limits": { 00:16:08.949 "rw_ios_per_sec": 0, 00:16:08.949 "rw_mbytes_per_sec": 0, 00:16:08.949 "r_mbytes_per_sec": 0, 00:16:08.949 "w_mbytes_per_sec": 0 00:16:08.949 }, 00:16:08.949 "claimed": true, 00:16:08.949 "claim_type": 
"exclusive_write", 00:16:08.949 "zoned": false, 00:16:08.949 "supported_io_types": { 00:16:08.949 "read": true, 00:16:08.949 "write": true, 00:16:08.949 "unmap": true, 00:16:08.949 "flush": true, 00:16:08.949 "reset": true, 00:16:08.949 "nvme_admin": false, 00:16:08.949 "nvme_io": false, 00:16:08.949 "nvme_io_md": false, 00:16:08.949 "write_zeroes": true, 00:16:08.949 "zcopy": true, 00:16:08.949 "get_zone_info": false, 00:16:08.949 "zone_management": false, 00:16:08.949 "zone_append": false, 00:16:08.949 "compare": false, 00:16:08.949 "compare_and_write": false, 00:16:08.949 "abort": true, 00:16:08.949 "seek_hole": false, 00:16:08.949 "seek_data": false, 00:16:08.949 "copy": true, 00:16:08.949 "nvme_iov_md": false 00:16:08.949 }, 00:16:08.949 "memory_domains": [ 00:16:08.949 { 00:16:08.949 "dma_device_id": "system", 00:16:08.949 "dma_device_type": 1 00:16:08.949 }, 00:16:08.949 { 00:16:08.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.949 "dma_device_type": 2 00:16:08.949 } 00:16:08.949 ], 00:16:08.949 "driver_specific": {} 00:16:08.949 } 00:16:08.949 ] 00:16:08.949 16:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:08.949 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:08.949 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:08.949 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:08.949 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:08.949 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:08.949 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:08.949 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:16:08.949 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:08.950 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.950 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.950 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.950 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.950 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.950 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.209 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.209 "name": "Existed_Raid", 00:16:09.209 "uuid": "ae81ec3f-6d3c-4cd6-98fc-7e49e1796ce0", 00:16:09.209 "strip_size_kb": 0, 00:16:09.209 "state": "online", 00:16:09.209 "raid_level": "raid1", 00:16:09.209 "superblock": true, 00:16:09.209 "num_base_bdevs": 2, 00:16:09.209 "num_base_bdevs_discovered": 2, 00:16:09.209 "num_base_bdevs_operational": 2, 00:16:09.209 "base_bdevs_list": [ 00:16:09.209 { 00:16:09.209 "name": "BaseBdev1", 00:16:09.209 "uuid": "cd208d23-1677-4b16-8935-a64c6119458c", 00:16:09.209 "is_configured": true, 00:16:09.209 "data_offset": 2048, 00:16:09.209 "data_size": 63488 00:16:09.209 }, 00:16:09.209 { 00:16:09.209 "name": "BaseBdev2", 00:16:09.209 "uuid": "b0c6f3c3-e812-437a-aa11-5ee042af0dcd", 00:16:09.209 "is_configured": true, 00:16:09.209 "data_offset": 2048, 00:16:09.209 "data_size": 63488 00:16:09.209 } 00:16:09.209 ] 00:16:09.209 }' 00:16:09.209 16:33:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:16:09.209 16:33:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.779 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:09.779 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:09.779 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:09.779 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:09.779 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:09.779 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:09.779 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:09.779 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:09.779 [2024-07-24 16:33:06.621449] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.039 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:10.039 "name": "Existed_Raid", 00:16:10.039 "aliases": [ 00:16:10.039 "ae81ec3f-6d3c-4cd6-98fc-7e49e1796ce0" 00:16:10.039 ], 00:16:10.039 "product_name": "Raid Volume", 00:16:10.039 "block_size": 512, 00:16:10.039 "num_blocks": 63488, 00:16:10.039 "uuid": "ae81ec3f-6d3c-4cd6-98fc-7e49e1796ce0", 00:16:10.039 "assigned_rate_limits": { 00:16:10.039 "rw_ios_per_sec": 0, 00:16:10.039 "rw_mbytes_per_sec": 0, 00:16:10.039 "r_mbytes_per_sec": 0, 00:16:10.039 "w_mbytes_per_sec": 0 00:16:10.039 }, 00:16:10.039 "claimed": false, 00:16:10.039 "zoned": false, 00:16:10.039 "supported_io_types": { 00:16:10.039 "read": true, 00:16:10.039 "write": true, 
00:16:10.039 "unmap": false, 00:16:10.039 "flush": false, 00:16:10.039 "reset": true, 00:16:10.039 "nvme_admin": false, 00:16:10.039 "nvme_io": false, 00:16:10.039 "nvme_io_md": false, 00:16:10.039 "write_zeroes": true, 00:16:10.039 "zcopy": false, 00:16:10.039 "get_zone_info": false, 00:16:10.039 "zone_management": false, 00:16:10.039 "zone_append": false, 00:16:10.039 "compare": false, 00:16:10.039 "compare_and_write": false, 00:16:10.039 "abort": false, 00:16:10.039 "seek_hole": false, 00:16:10.039 "seek_data": false, 00:16:10.039 "copy": false, 00:16:10.039 "nvme_iov_md": false 00:16:10.039 }, 00:16:10.039 "memory_domains": [ 00:16:10.039 { 00:16:10.039 "dma_device_id": "system", 00:16:10.039 "dma_device_type": 1 00:16:10.039 }, 00:16:10.039 { 00:16:10.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.039 "dma_device_type": 2 00:16:10.039 }, 00:16:10.039 { 00:16:10.039 "dma_device_id": "system", 00:16:10.039 "dma_device_type": 1 00:16:10.039 }, 00:16:10.039 { 00:16:10.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.039 "dma_device_type": 2 00:16:10.039 } 00:16:10.039 ], 00:16:10.039 "driver_specific": { 00:16:10.039 "raid": { 00:16:10.039 "uuid": "ae81ec3f-6d3c-4cd6-98fc-7e49e1796ce0", 00:16:10.039 "strip_size_kb": 0, 00:16:10.039 "state": "online", 00:16:10.039 "raid_level": "raid1", 00:16:10.039 "superblock": true, 00:16:10.039 "num_base_bdevs": 2, 00:16:10.039 "num_base_bdevs_discovered": 2, 00:16:10.039 "num_base_bdevs_operational": 2, 00:16:10.039 "base_bdevs_list": [ 00:16:10.039 { 00:16:10.039 "name": "BaseBdev1", 00:16:10.039 "uuid": "cd208d23-1677-4b16-8935-a64c6119458c", 00:16:10.039 "is_configured": true, 00:16:10.039 "data_offset": 2048, 00:16:10.039 "data_size": 63488 00:16:10.039 }, 00:16:10.039 { 00:16:10.039 "name": "BaseBdev2", 00:16:10.039 "uuid": "b0c6f3c3-e812-437a-aa11-5ee042af0dcd", 00:16:10.039 "is_configured": true, 00:16:10.039 "data_offset": 2048, 00:16:10.039 "data_size": 63488 00:16:10.039 } 00:16:10.039 ] 00:16:10.039 } 
00:16:10.039 } 00:16:10.039 }' 00:16:10.039 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:10.039 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:10.039 BaseBdev2' 00:16:10.039 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:10.039 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:10.039 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:10.298 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:10.298 "name": "BaseBdev1", 00:16:10.298 "aliases": [ 00:16:10.298 "cd208d23-1677-4b16-8935-a64c6119458c" 00:16:10.298 ], 00:16:10.298 "product_name": "Malloc disk", 00:16:10.298 "block_size": 512, 00:16:10.298 "num_blocks": 65536, 00:16:10.298 "uuid": "cd208d23-1677-4b16-8935-a64c6119458c", 00:16:10.298 "assigned_rate_limits": { 00:16:10.298 "rw_ios_per_sec": 0, 00:16:10.298 "rw_mbytes_per_sec": 0, 00:16:10.298 "r_mbytes_per_sec": 0, 00:16:10.298 "w_mbytes_per_sec": 0 00:16:10.298 }, 00:16:10.298 "claimed": true, 00:16:10.298 "claim_type": "exclusive_write", 00:16:10.298 "zoned": false, 00:16:10.298 "supported_io_types": { 00:16:10.298 "read": true, 00:16:10.298 "write": true, 00:16:10.298 "unmap": true, 00:16:10.298 "flush": true, 00:16:10.298 "reset": true, 00:16:10.298 "nvme_admin": false, 00:16:10.298 "nvme_io": false, 00:16:10.298 "nvme_io_md": false, 00:16:10.298 "write_zeroes": true, 00:16:10.298 "zcopy": true, 00:16:10.298 "get_zone_info": false, 00:16:10.298 "zone_management": false, 00:16:10.298 "zone_append": false, 00:16:10.298 "compare": false, 00:16:10.298 "compare_and_write": 
false, 00:16:10.298 "abort": true, 00:16:10.298 "seek_hole": false, 00:16:10.298 "seek_data": false, 00:16:10.298 "copy": true, 00:16:10.298 "nvme_iov_md": false 00:16:10.298 }, 00:16:10.298 "memory_domains": [ 00:16:10.298 { 00:16:10.298 "dma_device_id": "system", 00:16:10.298 "dma_device_type": 1 00:16:10.298 }, 00:16:10.298 { 00:16:10.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.298 "dma_device_type": 2 00:16:10.298 } 00:16:10.298 ], 00:16:10.298 "driver_specific": {} 00:16:10.298 }' 00:16:10.298 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:10.298 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:10.298 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:10.298 16:33:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.298 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.298 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:10.298 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:10.298 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:10.298 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:10.298 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:10.557 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:10.557 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:10.557 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:10.557 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:10.557 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:10.816 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:10.816 "name": "BaseBdev2", 00:16:10.816 "aliases": [ 00:16:10.816 "b0c6f3c3-e812-437a-aa11-5ee042af0dcd" 00:16:10.816 ], 00:16:10.816 "product_name": "Malloc disk", 00:16:10.816 "block_size": 512, 00:16:10.816 "num_blocks": 65536, 00:16:10.816 "uuid": "b0c6f3c3-e812-437a-aa11-5ee042af0dcd", 00:16:10.816 "assigned_rate_limits": { 00:16:10.816 "rw_ios_per_sec": 0, 00:16:10.816 "rw_mbytes_per_sec": 0, 00:16:10.816 "r_mbytes_per_sec": 0, 00:16:10.816 "w_mbytes_per_sec": 0 00:16:10.816 }, 00:16:10.816 "claimed": true, 00:16:10.816 "claim_type": "exclusive_write", 00:16:10.816 "zoned": false, 00:16:10.816 "supported_io_types": { 00:16:10.816 "read": true, 00:16:10.816 "write": true, 00:16:10.816 "unmap": true, 00:16:10.816 "flush": true, 00:16:10.816 "reset": true, 00:16:10.816 "nvme_admin": false, 00:16:10.816 "nvme_io": false, 00:16:10.816 "nvme_io_md": false, 00:16:10.816 "write_zeroes": true, 00:16:10.816 "zcopy": true, 00:16:10.816 "get_zone_info": false, 00:16:10.816 "zone_management": false, 00:16:10.816 "zone_append": false, 00:16:10.816 "compare": false, 00:16:10.816 "compare_and_write": false, 00:16:10.816 "abort": true, 00:16:10.816 "seek_hole": false, 00:16:10.816 "seek_data": false, 00:16:10.816 "copy": true, 00:16:10.816 "nvme_iov_md": false 00:16:10.816 }, 00:16:10.816 "memory_domains": [ 00:16:10.816 { 00:16:10.816 "dma_device_id": "system", 00:16:10.816 "dma_device_type": 1 00:16:10.816 }, 00:16:10.816 { 00:16:10.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.816 "dma_device_type": 2 00:16:10.816 } 00:16:10.816 ], 00:16:10.816 "driver_specific": {} 00:16:10.816 }' 00:16:10.816 16:33:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:10.816 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:10.816 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:10.816 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.816 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:10.817 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:10.817 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:10.817 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:10.817 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:10.817 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.075 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:11.075 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:11.075 16:33:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:11.334 [2024-07-24 16:33:07.969020] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:11.334 
16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.334 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.594 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.594 "name": "Existed_Raid", 00:16:11.594 "uuid": "ae81ec3f-6d3c-4cd6-98fc-7e49e1796ce0", 00:16:11.594 "strip_size_kb": 0, 00:16:11.594 "state": "online", 00:16:11.594 "raid_level": "raid1", 00:16:11.594 "superblock": true, 00:16:11.594 "num_base_bdevs": 2, 
00:16:11.594 "num_base_bdevs_discovered": 1, 00:16:11.594 "num_base_bdevs_operational": 1, 00:16:11.594 "base_bdevs_list": [ 00:16:11.594 { 00:16:11.594 "name": null, 00:16:11.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.594 "is_configured": false, 00:16:11.594 "data_offset": 2048, 00:16:11.594 "data_size": 63488 00:16:11.594 }, 00:16:11.594 { 00:16:11.594 "name": "BaseBdev2", 00:16:11.594 "uuid": "b0c6f3c3-e812-437a-aa11-5ee042af0dcd", 00:16:11.594 "is_configured": true, 00:16:11.594 "data_offset": 2048, 00:16:11.594 "data_size": 63488 00:16:11.594 } 00:16:11.594 ] 00:16:11.594 }' 00:16:11.594 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.594 16:33:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.161 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:12.161 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:12.161 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.161 16:33:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:12.420 16:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:12.420 16:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:12.420 16:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:12.420 [2024-07-24 16:33:09.241149] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:12.420 [2024-07-24 16:33:09.241266] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:16:12.678 [2024-07-24 16:33:09.369963] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.678 [2024-07-24 16:33:09.370019] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.678 [2024-07-24 16:33:09.370039] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:16:12.678 16:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:12.678 16:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:12.678 16:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.678 16:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:12.937 16:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:12.937 16:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:12.937 16:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:12.937 16:33:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1616693 00:16:12.937 16:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 1616693 ']' 00:16:12.937 16:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 1616693 00:16:12.937 16:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:12.937 16:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:12.937 16:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 1616693 00:16:12.937 16:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:12.937 16:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:12.937 16:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1616693' 00:16:12.937 killing process with pid 1616693 00:16:12.937 16:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 1616693 00:16:12.937 [2024-07-24 16:33:09.674696] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:12.937 16:33:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 1616693 00:16:12.937 [2024-07-24 16:33:09.698066] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.907 16:33:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:14.907 00:16:14.907 real 0m12.127s 00:16:14.907 user 0m19.510s 00:16:14.907 sys 0m2.142s 00:16:14.907 16:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:14.907 16:33:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.907 ************************************ 00:16:14.907 END TEST raid_state_function_test_sb 00:16:14.907 ************************************ 00:16:14.907 16:33:11 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:14.907 16:33:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:14.907 16:33:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:14.907 16:33:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.907 ************************************ 00:16:14.907 START TEST raid_superblock_test 00:16:14.907 ************************************ 00:16:14.907 16:33:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=1619567 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 1619567 /var/tmp/spdk-raid.sock 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 1619567 ']' 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:14.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:14.907 16:33:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.907 [2024-07-24 16:33:11.611605] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:16:14.907 [2024-07-24 16:33:11.611724] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1619567 ] 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:02.3 cannot be used 
00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:14.907 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:14.907 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:14.907 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:15.167 [2024-07-24 16:33:11.837366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.426 [2024-07-24 16:33:12.105245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.685 [2024-07-24 16:33:12.439793] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.685 [2024-07-24 16:33:12.439827] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.944 16:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:15.944 16:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:15.944 16:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:16:15.944 16:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:15.945 16:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:16:15.945 16:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:16:15.945 16:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:15.945 16:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.945 16:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.945 16:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.945 16:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:16.203 malloc1 00:16:16.203 16:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:16.462 [2024-07-24 16:33:13.115460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:16.463 [2024-07-24 16:33:13.115524] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.463 [2024-07-24 16:33:13.115554] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:16:16.463 [2024-07-24 16:33:13.115576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.463 [2024-07-24 16:33:13.118327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.463 [2024-07-24 16:33:13.118361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:16.463 pt1 00:16:16.463 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:16.463 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:16.463 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:16:16.463 16:33:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:16:16.463 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:16.463 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:16.463 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:16.463 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:16.463 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:16.723 malloc2 00:16:16.723 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:16.981 [2024-07-24 16:33:13.622374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:16.981 [2024-07-24 16:33:13.622431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.981 [2024-07-24 16:33:13.622460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:16:16.981 [2024-07-24 16:33:13.622477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.981 [2024-07-24 16:33:13.625258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.981 [2024-07-24 16:33:13.625297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:16.981 pt2 00:16:16.981 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:16.981 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:16.981 16:33:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:17.240 [2024-07-24 16:33:13.851010] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:17.240 [2024-07-24 16:33:13.853386] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:17.240 [2024-07-24 16:33:13.853608] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:16:17.240 [2024-07-24 16:33:13.853630] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:17.240 [2024-07-24 16:33:13.853975] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:16:17.240 [2024-07-24 16:33:13.854219] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:16:17.240 [2024-07-24 16:33:13.854239] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:16:17.240 [2024-07-24 16:33:13.854430] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.240 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:17.240 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:17.240 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:17.240 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:17.240 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:17.240 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:17.240 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:16:17.240 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:17.240 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:17.240 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:17.240 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.240 16:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.499 16:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.499 "name": "raid_bdev1", 00:16:17.499 "uuid": "22061426-4518-4280-b24a-a4fa750d18d8", 00:16:17.499 "strip_size_kb": 0, 00:16:17.499 "state": "online", 00:16:17.499 "raid_level": "raid1", 00:16:17.499 "superblock": true, 00:16:17.499 "num_base_bdevs": 2, 00:16:17.499 "num_base_bdevs_discovered": 2, 00:16:17.499 "num_base_bdevs_operational": 2, 00:16:17.499 "base_bdevs_list": [ 00:16:17.499 { 00:16:17.499 "name": "pt1", 00:16:17.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.499 "is_configured": true, 00:16:17.499 "data_offset": 2048, 00:16:17.499 "data_size": 63488 00:16:17.499 }, 00:16:17.499 { 00:16:17.499 "name": "pt2", 00:16:17.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.499 "is_configured": true, 00:16:17.499 "data_offset": 2048, 00:16:17.499 "data_size": 63488 00:16:17.499 } 00:16:17.499 ] 00:16:17.499 }' 00:16:17.500 16:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.500 16:33:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.068 16:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:16:18.068 16:33:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:18.068 16:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:18.068 16:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:18.068 16:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:18.068 16:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:18.068 16:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:18.068 16:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:18.068 [2024-07-24 16:33:14.781795] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.068 16:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:18.068 "name": "raid_bdev1", 00:16:18.068 "aliases": [ 00:16:18.068 "22061426-4518-4280-b24a-a4fa750d18d8" 00:16:18.068 ], 00:16:18.068 "product_name": "Raid Volume", 00:16:18.068 "block_size": 512, 00:16:18.068 "num_blocks": 63488, 00:16:18.068 "uuid": "22061426-4518-4280-b24a-a4fa750d18d8", 00:16:18.068 "assigned_rate_limits": { 00:16:18.068 "rw_ios_per_sec": 0, 00:16:18.068 "rw_mbytes_per_sec": 0, 00:16:18.068 "r_mbytes_per_sec": 0, 00:16:18.068 "w_mbytes_per_sec": 0 00:16:18.068 }, 00:16:18.068 "claimed": false, 00:16:18.068 "zoned": false, 00:16:18.068 "supported_io_types": { 00:16:18.068 "read": true, 00:16:18.068 "write": true, 00:16:18.068 "unmap": false, 00:16:18.068 "flush": false, 00:16:18.068 "reset": true, 00:16:18.068 "nvme_admin": false, 00:16:18.068 "nvme_io": false, 00:16:18.068 "nvme_io_md": false, 00:16:18.068 "write_zeroes": true, 00:16:18.068 "zcopy": false, 00:16:18.068 "get_zone_info": false, 00:16:18.068 "zone_management": false, 00:16:18.068 "zone_append": false, 
00:16:18.068 "compare": false, 00:16:18.068 "compare_and_write": false, 00:16:18.068 "abort": false, 00:16:18.068 "seek_hole": false, 00:16:18.068 "seek_data": false, 00:16:18.068 "copy": false, 00:16:18.068 "nvme_iov_md": false 00:16:18.068 }, 00:16:18.068 "memory_domains": [ 00:16:18.068 { 00:16:18.068 "dma_device_id": "system", 00:16:18.068 "dma_device_type": 1 00:16:18.068 }, 00:16:18.068 { 00:16:18.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.068 "dma_device_type": 2 00:16:18.068 }, 00:16:18.068 { 00:16:18.068 "dma_device_id": "system", 00:16:18.068 "dma_device_type": 1 00:16:18.068 }, 00:16:18.068 { 00:16:18.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.068 "dma_device_type": 2 00:16:18.068 } 00:16:18.068 ], 00:16:18.068 "driver_specific": { 00:16:18.068 "raid": { 00:16:18.068 "uuid": "22061426-4518-4280-b24a-a4fa750d18d8", 00:16:18.068 "strip_size_kb": 0, 00:16:18.068 "state": "online", 00:16:18.068 "raid_level": "raid1", 00:16:18.068 "superblock": true, 00:16:18.068 "num_base_bdevs": 2, 00:16:18.068 "num_base_bdevs_discovered": 2, 00:16:18.068 "num_base_bdevs_operational": 2, 00:16:18.068 "base_bdevs_list": [ 00:16:18.068 { 00:16:18.068 "name": "pt1", 00:16:18.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.068 "is_configured": true, 00:16:18.068 "data_offset": 2048, 00:16:18.068 "data_size": 63488 00:16:18.068 }, 00:16:18.068 { 00:16:18.068 "name": "pt2", 00:16:18.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.068 "is_configured": true, 00:16:18.068 "data_offset": 2048, 00:16:18.068 "data_size": 63488 00:16:18.068 } 00:16:18.068 ] 00:16:18.068 } 00:16:18.068 } 00:16:18.068 }' 00:16:18.068 16:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:18.068 16:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:18.068 pt2' 00:16:18.068 16:33:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:18.068 16:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:18.068 16:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:18.327 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:18.327 "name": "pt1", 00:16:18.327 "aliases": [ 00:16:18.327 "00000000-0000-0000-0000-000000000001" 00:16:18.327 ], 00:16:18.327 "product_name": "passthru", 00:16:18.327 "block_size": 512, 00:16:18.327 "num_blocks": 65536, 00:16:18.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.327 "assigned_rate_limits": { 00:16:18.327 "rw_ios_per_sec": 0, 00:16:18.327 "rw_mbytes_per_sec": 0, 00:16:18.327 "r_mbytes_per_sec": 0, 00:16:18.327 "w_mbytes_per_sec": 0 00:16:18.327 }, 00:16:18.327 "claimed": true, 00:16:18.327 "claim_type": "exclusive_write", 00:16:18.327 "zoned": false, 00:16:18.327 "supported_io_types": { 00:16:18.327 "read": true, 00:16:18.327 "write": true, 00:16:18.327 "unmap": true, 00:16:18.327 "flush": true, 00:16:18.327 "reset": true, 00:16:18.328 "nvme_admin": false, 00:16:18.328 "nvme_io": false, 00:16:18.328 "nvme_io_md": false, 00:16:18.328 "write_zeroes": true, 00:16:18.328 "zcopy": true, 00:16:18.328 "get_zone_info": false, 00:16:18.328 "zone_management": false, 00:16:18.328 "zone_append": false, 00:16:18.328 "compare": false, 00:16:18.328 "compare_and_write": false, 00:16:18.328 "abort": true, 00:16:18.328 "seek_hole": false, 00:16:18.328 "seek_data": false, 00:16:18.328 "copy": true, 00:16:18.328 "nvme_iov_md": false 00:16:18.328 }, 00:16:18.328 "memory_domains": [ 00:16:18.328 { 00:16:18.328 "dma_device_id": "system", 00:16:18.328 "dma_device_type": 1 00:16:18.328 }, 00:16:18.328 { 00:16:18.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.328 
"dma_device_type": 2 00:16:18.328 } 00:16:18.328 ], 00:16:18.328 "driver_specific": { 00:16:18.328 "passthru": { 00:16:18.328 "name": "pt1", 00:16:18.328 "base_bdev_name": "malloc1" 00:16:18.328 } 00:16:18.328 } 00:16:18.328 }' 00:16:18.328 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.328 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.328 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:18.328 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.586 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.586 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:18.586 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.586 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.586 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.586 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.586 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.586 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.586 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:18.586 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:18.586 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:18.845 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:18.846 "name": "pt2", 00:16:18.846 "aliases": [ 00:16:18.846 
"00000000-0000-0000-0000-000000000002" 00:16:18.846 ], 00:16:18.846 "product_name": "passthru", 00:16:18.846 "block_size": 512, 00:16:18.846 "num_blocks": 65536, 00:16:18.846 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.846 "assigned_rate_limits": { 00:16:18.846 "rw_ios_per_sec": 0, 00:16:18.846 "rw_mbytes_per_sec": 0, 00:16:18.846 "r_mbytes_per_sec": 0, 00:16:18.846 "w_mbytes_per_sec": 0 00:16:18.846 }, 00:16:18.846 "claimed": true, 00:16:18.846 "claim_type": "exclusive_write", 00:16:18.846 "zoned": false, 00:16:18.846 "supported_io_types": { 00:16:18.846 "read": true, 00:16:18.846 "write": true, 00:16:18.846 "unmap": true, 00:16:18.846 "flush": true, 00:16:18.846 "reset": true, 00:16:18.846 "nvme_admin": false, 00:16:18.846 "nvme_io": false, 00:16:18.846 "nvme_io_md": false, 00:16:18.846 "write_zeroes": true, 00:16:18.846 "zcopy": true, 00:16:18.846 "get_zone_info": false, 00:16:18.846 "zone_management": false, 00:16:18.846 "zone_append": false, 00:16:18.846 "compare": false, 00:16:18.846 "compare_and_write": false, 00:16:18.846 "abort": true, 00:16:18.846 "seek_hole": false, 00:16:18.846 "seek_data": false, 00:16:18.846 "copy": true, 00:16:18.846 "nvme_iov_md": false 00:16:18.846 }, 00:16:18.846 "memory_domains": [ 00:16:18.846 { 00:16:18.846 "dma_device_id": "system", 00:16:18.846 "dma_device_type": 1 00:16:18.846 }, 00:16:18.846 { 00:16:18.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.846 "dma_device_type": 2 00:16:18.846 } 00:16:18.846 ], 00:16:18.846 "driver_specific": { 00:16:18.846 "passthru": { 00:16:18.846 "name": "pt2", 00:16:18.846 "base_bdev_name": "malloc2" 00:16:18.846 } 00:16:18.846 } 00:16:18.846 }' 00:16:18.846 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.846 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:19.105 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:19.105 16:33:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:19.105 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:19.105 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:19.105 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:19.105 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:19.105 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:19.105 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:19.105 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:19.105 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:19.105 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:19.105 16:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:16:19.363 [2024-07-24 16:33:16.153535] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.363 16:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=22061426-4518-4280-b24a-a4fa750d18d8 00:16:19.363 16:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 22061426-4518-4280-b24a-a4fa750d18d8 ']' 00:16:19.363 16:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:19.622 [2024-07-24 16:33:16.381818] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.622 [2024-07-24 16:33:16.381850] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:16:19.622 [2024-07-24 16:33:16.381932] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.622 [2024-07-24 16:33:16.382003] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.622 [2024-07-24 16:33:16.382029] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:16:19.622 16:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.622 16:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:16:19.881 16:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:16:19.881 16:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:16:19.881 16:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:19.881 16:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:20.140 16:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:20.140 16:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:20.399 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:20.399 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:16:20.658 
16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 
00:16:20.658 [2024-07-24 16:33:17.496801] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:20.658 [2024-07-24 16:33:17.499110] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:20.658 [2024-07-24 16:33:17.499191] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:20.658 [2024-07-24 16:33:17.499249] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:20.658 [2024-07-24 16:33:17.499273] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.658 [2024-07-24 16:33:17.499289] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:16:20.658 request: 00:16:20.658 { 00:16:20.658 "name": "raid_bdev1", 00:16:20.658 "raid_level": "raid1", 00:16:20.658 "base_bdevs": [ 00:16:20.658 "malloc1", 00:16:20.658 "malloc2" 00:16:20.658 ], 00:16:20.658 "superblock": false, 00:16:20.658 "method": "bdev_raid_create", 00:16:20.658 "req_id": 1 00:16:20.658 } 00:16:20.658 Got JSON-RPC error response 00:16:20.658 response: 00:16:20.658 { 00:16:20.658 "code": -17, 00:16:20.658 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:20.658 } 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:20.658 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.658 16:33:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:16:20.917 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:16:20.917 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:16:20.918 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:21.177 [2024-07-24 16:33:17.937908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:21.177 [2024-07-24 16:33:17.937971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.177 [2024-07-24 16:33:17.937995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:16:21.177 [2024-07-24 16:33:17.938023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.177 [2024-07-24 16:33:17.940809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.177 [2024-07-24 16:33:17.940847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:21.177 [2024-07-24 16:33:17.940939] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:21.177 [2024-07-24 16:33:17.941030] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:21.177 pt1 00:16:21.177 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:21.177 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:21.177 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:21.177 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:21.177 16:33:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:21.177 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:21.177 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.177 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.177 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.177 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.177 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.177 16:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.436 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.436 "name": "raid_bdev1", 00:16:21.436 "uuid": "22061426-4518-4280-b24a-a4fa750d18d8", 00:16:21.436 "strip_size_kb": 0, 00:16:21.436 "state": "configuring", 00:16:21.436 "raid_level": "raid1", 00:16:21.436 "superblock": true, 00:16:21.436 "num_base_bdevs": 2, 00:16:21.436 "num_base_bdevs_discovered": 1, 00:16:21.436 "num_base_bdevs_operational": 2, 00:16:21.436 "base_bdevs_list": [ 00:16:21.436 { 00:16:21.436 "name": "pt1", 00:16:21.436 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:21.436 "is_configured": true, 00:16:21.436 "data_offset": 2048, 00:16:21.436 "data_size": 63488 00:16:21.436 }, 00:16:21.436 { 00:16:21.436 "name": null, 00:16:21.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.436 "is_configured": false, 00:16:21.436 "data_offset": 2048, 00:16:21.436 "data_size": 63488 00:16:21.436 } 00:16:21.436 ] 00:16:21.436 }' 00:16:21.436 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:16:21.436 16:33:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.004 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:16:22.004 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:16:22.004 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:22.004 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:22.263 [2024-07-24 16:33:18.952651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:22.263 [2024-07-24 16:33:18.952722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.263 [2024-07-24 16:33:18.952747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:16:22.263 [2024-07-24 16:33:18.952768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.263 [2024-07-24 16:33:18.953356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.263 [2024-07-24 16:33:18.953385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:22.263 [2024-07-24 16:33:18.953478] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:22.263 [2024-07-24 16:33:18.953513] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.263 [2024-07-24 16:33:18.953674] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:16:22.263 [2024-07-24 16:33:18.953692] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:22.263 [2024-07-24 16:33:18.953987] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000010640 00:16:22.263 [2024-07-24 16:33:18.954217] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:16:22.263 [2024-07-24 16:33:18.954232] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:16:22.263 [2024-07-24 16:33:18.954410] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.263 pt2 00:16:22.263 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:22.263 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:22.263 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:22.263 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:22.263 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:22.263 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:22.263 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:22.263 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:22.263 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.263 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.263 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.263 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.263 16:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.263 16:33:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.522 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.522 "name": "raid_bdev1", 00:16:22.522 "uuid": "22061426-4518-4280-b24a-a4fa750d18d8", 00:16:22.522 "strip_size_kb": 0, 00:16:22.522 "state": "online", 00:16:22.522 "raid_level": "raid1", 00:16:22.522 "superblock": true, 00:16:22.522 "num_base_bdevs": 2, 00:16:22.522 "num_base_bdevs_discovered": 2, 00:16:22.522 "num_base_bdevs_operational": 2, 00:16:22.522 "base_bdevs_list": [ 00:16:22.522 { 00:16:22.522 "name": "pt1", 00:16:22.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.523 "is_configured": true, 00:16:22.523 "data_offset": 2048, 00:16:22.523 "data_size": 63488 00:16:22.523 }, 00:16:22.523 { 00:16:22.523 "name": "pt2", 00:16:22.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.523 "is_configured": true, 00:16:22.523 "data_offset": 2048, 00:16:22.523 "data_size": 63488 00:16:22.523 } 00:16:22.523 ] 00:16:22.523 }' 00:16:22.523 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.523 16:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.090 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:16:23.090 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:23.090 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:23.090 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:23.090 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:23.090 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:23.090 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:23.090 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:23.090 [2024-07-24 16:33:19.931642] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.090 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:23.090 "name": "raid_bdev1", 00:16:23.090 "aliases": [ 00:16:23.090 "22061426-4518-4280-b24a-a4fa750d18d8" 00:16:23.090 ], 00:16:23.090 "product_name": "Raid Volume", 00:16:23.090 "block_size": 512, 00:16:23.090 "num_blocks": 63488, 00:16:23.090 "uuid": "22061426-4518-4280-b24a-a4fa750d18d8", 00:16:23.090 "assigned_rate_limits": { 00:16:23.090 "rw_ios_per_sec": 0, 00:16:23.090 "rw_mbytes_per_sec": 0, 00:16:23.090 "r_mbytes_per_sec": 0, 00:16:23.090 "w_mbytes_per_sec": 0 00:16:23.090 }, 00:16:23.090 "claimed": false, 00:16:23.090 "zoned": false, 00:16:23.090 "supported_io_types": { 00:16:23.090 "read": true, 00:16:23.090 "write": true, 00:16:23.090 "unmap": false, 00:16:23.090 "flush": false, 00:16:23.090 "reset": true, 00:16:23.090 "nvme_admin": false, 00:16:23.090 "nvme_io": false, 00:16:23.090 "nvme_io_md": false, 00:16:23.090 "write_zeroes": true, 00:16:23.090 "zcopy": false, 00:16:23.090 "get_zone_info": false, 00:16:23.090 "zone_management": false, 00:16:23.090 "zone_append": false, 00:16:23.090 "compare": false, 00:16:23.090 "compare_and_write": false, 00:16:23.090 "abort": false, 00:16:23.090 "seek_hole": false, 00:16:23.090 "seek_data": false, 00:16:23.090 "copy": false, 00:16:23.090 "nvme_iov_md": false 00:16:23.090 }, 00:16:23.090 "memory_domains": [ 00:16:23.090 { 00:16:23.090 "dma_device_id": "system", 00:16:23.090 "dma_device_type": 1 00:16:23.090 }, 00:16:23.090 { 00:16:23.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.090 "dma_device_type": 2 00:16:23.090 }, 00:16:23.090 { 00:16:23.090 "dma_device_id": "system", 
00:16:23.090 "dma_device_type": 1 00:16:23.090 }, 00:16:23.090 { 00:16:23.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.090 "dma_device_type": 2 00:16:23.090 } 00:16:23.090 ], 00:16:23.090 "driver_specific": { 00:16:23.090 "raid": { 00:16:23.090 "uuid": "22061426-4518-4280-b24a-a4fa750d18d8", 00:16:23.090 "strip_size_kb": 0, 00:16:23.090 "state": "online", 00:16:23.090 "raid_level": "raid1", 00:16:23.090 "superblock": true, 00:16:23.090 "num_base_bdevs": 2, 00:16:23.090 "num_base_bdevs_discovered": 2, 00:16:23.090 "num_base_bdevs_operational": 2, 00:16:23.090 "base_bdevs_list": [ 00:16:23.090 { 00:16:23.090 "name": "pt1", 00:16:23.090 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.090 "is_configured": true, 00:16:23.090 "data_offset": 2048, 00:16:23.090 "data_size": 63488 00:16:23.090 }, 00:16:23.090 { 00:16:23.090 "name": "pt2", 00:16:23.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.090 "is_configured": true, 00:16:23.090 "data_offset": 2048, 00:16:23.090 "data_size": 63488 00:16:23.090 } 00:16:23.090 ] 00:16:23.090 } 00:16:23.090 } 00:16:23.090 }' 00:16:23.090 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:23.349 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:23.349 pt2' 00:16:23.349 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.349 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:23.349 16:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:23.607 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.607 "name": "pt1", 00:16:23.607 "aliases": [ 00:16:23.607 "00000000-0000-0000-0000-000000000001" 
00:16:23.607 ], 00:16:23.607 "product_name": "passthru", 00:16:23.607 "block_size": 512, 00:16:23.607 "num_blocks": 65536, 00:16:23.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.607 "assigned_rate_limits": { 00:16:23.607 "rw_ios_per_sec": 0, 00:16:23.607 "rw_mbytes_per_sec": 0, 00:16:23.607 "r_mbytes_per_sec": 0, 00:16:23.607 "w_mbytes_per_sec": 0 00:16:23.607 }, 00:16:23.607 "claimed": true, 00:16:23.607 "claim_type": "exclusive_write", 00:16:23.607 "zoned": false, 00:16:23.607 "supported_io_types": { 00:16:23.607 "read": true, 00:16:23.607 "write": true, 00:16:23.607 "unmap": true, 00:16:23.607 "flush": true, 00:16:23.607 "reset": true, 00:16:23.607 "nvme_admin": false, 00:16:23.607 "nvme_io": false, 00:16:23.607 "nvme_io_md": false, 00:16:23.607 "write_zeroes": true, 00:16:23.607 "zcopy": true, 00:16:23.607 "get_zone_info": false, 00:16:23.607 "zone_management": false, 00:16:23.607 "zone_append": false, 00:16:23.607 "compare": false, 00:16:23.607 "compare_and_write": false, 00:16:23.607 "abort": true, 00:16:23.607 "seek_hole": false, 00:16:23.607 "seek_data": false, 00:16:23.607 "copy": true, 00:16:23.607 "nvme_iov_md": false 00:16:23.607 }, 00:16:23.607 "memory_domains": [ 00:16:23.607 { 00:16:23.607 "dma_device_id": "system", 00:16:23.607 "dma_device_type": 1 00:16:23.607 }, 00:16:23.607 { 00:16:23.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.607 "dma_device_type": 2 00:16:23.607 } 00:16:23.607 ], 00:16:23.607 "driver_specific": { 00:16:23.607 "passthru": { 00:16:23.607 "name": "pt1", 00:16:23.607 "base_bdev_name": "malloc1" 00:16:23.607 } 00:16:23.607 } 00:16:23.607 }' 00:16:23.607 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.607 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.607 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:23.607 16:33:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.607 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.607 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:23.607 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.607 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.865 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.865 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.865 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.865 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.865 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.865 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:23.865 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:24.124 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:24.124 "name": "pt2", 00:16:24.124 "aliases": [ 00:16:24.124 "00000000-0000-0000-0000-000000000002" 00:16:24.124 ], 00:16:24.124 "product_name": "passthru", 00:16:24.124 "block_size": 512, 00:16:24.124 "num_blocks": 65536, 00:16:24.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.124 "assigned_rate_limits": { 00:16:24.124 "rw_ios_per_sec": 0, 00:16:24.124 "rw_mbytes_per_sec": 0, 00:16:24.124 "r_mbytes_per_sec": 0, 00:16:24.124 "w_mbytes_per_sec": 0 00:16:24.124 }, 00:16:24.124 "claimed": true, 00:16:24.124 "claim_type": "exclusive_write", 00:16:24.124 "zoned": false, 00:16:24.124 "supported_io_types": { 00:16:24.124 "read": true, 
00:16:24.124 "write": true, 00:16:24.124 "unmap": true, 00:16:24.124 "flush": true, 00:16:24.124 "reset": true, 00:16:24.124 "nvme_admin": false, 00:16:24.124 "nvme_io": false, 00:16:24.124 "nvme_io_md": false, 00:16:24.124 "write_zeroes": true, 00:16:24.124 "zcopy": true, 00:16:24.124 "get_zone_info": false, 00:16:24.124 "zone_management": false, 00:16:24.124 "zone_append": false, 00:16:24.124 "compare": false, 00:16:24.124 "compare_and_write": false, 00:16:24.124 "abort": true, 00:16:24.124 "seek_hole": false, 00:16:24.124 "seek_data": false, 00:16:24.124 "copy": true, 00:16:24.124 "nvme_iov_md": false 00:16:24.124 }, 00:16:24.124 "memory_domains": [ 00:16:24.124 { 00:16:24.124 "dma_device_id": "system", 00:16:24.124 "dma_device_type": 1 00:16:24.124 }, 00:16:24.124 { 00:16:24.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.124 "dma_device_type": 2 00:16:24.124 } 00:16:24.124 ], 00:16:24.124 "driver_specific": { 00:16:24.124 "passthru": { 00:16:24.124 "name": "pt2", 00:16:24.124 "base_bdev_name": "malloc2" 00:16:24.124 } 00:16:24.124 } 00:16:24.124 }' 00:16:24.124 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.124 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.124 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:24.124 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.124 16:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.384 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:24.384 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.384 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.384 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:24.384 16:33:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.384 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.384 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:24.384 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:16:24.384 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:24.952 [2024-07-24 16:33:21.708436] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.952 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 22061426-4518-4280-b24a-a4fa750d18d8 '!=' 22061426-4518-4280-b24a-a4fa750d18d8 ']' 00:16:24.952 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:16:24.952 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:24.952 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:24.952 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:25.211 [2024-07-24 16:33:21.940759] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:25.211 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:25.211 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:25.211 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:25.211 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:25.211 16:33:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:25.211 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:25.211 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:25.211 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:25.211 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:25.211 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:25.211 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.211 16:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.470 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.470 "name": "raid_bdev1", 00:16:25.470 "uuid": "22061426-4518-4280-b24a-a4fa750d18d8", 00:16:25.470 "strip_size_kb": 0, 00:16:25.470 "state": "online", 00:16:25.470 "raid_level": "raid1", 00:16:25.470 "superblock": true, 00:16:25.470 "num_base_bdevs": 2, 00:16:25.470 "num_base_bdevs_discovered": 1, 00:16:25.470 "num_base_bdevs_operational": 1, 00:16:25.470 "base_bdevs_list": [ 00:16:25.470 { 00:16:25.470 "name": null, 00:16:25.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.470 "is_configured": false, 00:16:25.470 "data_offset": 2048, 00:16:25.470 "data_size": 63488 00:16:25.470 }, 00:16:25.470 { 00:16:25.470 "name": "pt2", 00:16:25.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.470 "is_configured": true, 00:16:25.470 "data_offset": 2048, 00:16:25.470 "data_size": 63488 00:16:25.470 } 00:16:25.470 ] 00:16:25.470 }' 00:16:25.470 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.470 16:33:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.063 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:26.363 [2024-07-24 16:33:22.931399] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.363 [2024-07-24 16:33:22.931432] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.363 [2024-07-24 16:33:22.931511] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.363 [2024-07-24 16:33:22.931565] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.363 [2024-07-24 16:33:22.931584] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:16:26.363 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.363 16:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:16:26.363 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:16:26.363 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:16:26.363 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:16:26.363 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:16:26.363 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=1 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:26.622 [2024-07-24 16:33:23.452773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:26.622 [2024-07-24 16:33:23.452843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.622 [2024-07-24 16:33:23.452867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:16:26.622 [2024-07-24 16:33:23.452888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.622 [2024-07-24 16:33:23.455642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.622 [2024-07-24 16:33:23.455678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:26.622 [2024-07-24 16:33:23.455766] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:26.622 [2024-07-24 16:33:23.455821] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:26.622 [2024-07-24 16:33:23.455963] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:16:26.622 [2024-07-24 16:33:23.455980] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:26.622 [2024-07-24 16:33:23.456306] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:16:26.622 [2024-07-24 16:33:23.456548] 
bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:16:26.622 [2024-07-24 16:33:23.456563] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:16:26.622 [2024-07-24 16:33:23.456776] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.622 pt2 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.622 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.881 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.881 "name": "raid_bdev1", 00:16:26.881 "uuid": "22061426-4518-4280-b24a-a4fa750d18d8", 
00:16:26.881 "strip_size_kb": 0, 00:16:26.881 "state": "online", 00:16:26.881 "raid_level": "raid1", 00:16:26.881 "superblock": true, 00:16:26.881 "num_base_bdevs": 2, 00:16:26.881 "num_base_bdevs_discovered": 1, 00:16:26.881 "num_base_bdevs_operational": 1, 00:16:26.881 "base_bdevs_list": [ 00:16:26.881 { 00:16:26.881 "name": null, 00:16:26.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.881 "is_configured": false, 00:16:26.881 "data_offset": 2048, 00:16:26.881 "data_size": 63488 00:16:26.881 }, 00:16:26.881 { 00:16:26.881 "name": "pt2", 00:16:26.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.881 "is_configured": true, 00:16:26.881 "data_offset": 2048, 00:16:26.881 "data_size": 63488 00:16:26.881 } 00:16:26.881 ] 00:16:26.881 }' 00:16:26.881 16:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.881 16:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.445 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:27.703 [2024-07-24 16:33:24.435547] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.703 [2024-07-24 16:33:24.435580] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.703 [2024-07-24 16:33:24.435657] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.703 [2024-07-24 16:33:24.435716] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.703 [2024-07-24 16:33:24.435732] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:16:27.704 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.704 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:16:27.961 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:16:27.961 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:16:27.961 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:16:27.961 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:27.961 [2024-07-24 16:33:24.776390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:27.961 [2024-07-24 16:33:24.776448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.961 [2024-07-24 16:33:24.776474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:16:27.961 [2024-07-24 16:33:24.776491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.961 [2024-07-24 16:33:24.779251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.961 [2024-07-24 16:33:24.779282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:27.961 [2024-07-24 16:33:24.779371] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:27.961 [2024-07-24 16:33:24.779458] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:27.961 [2024-07-24 16:33:24.779642] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:27.961 [2024-07-24 16:33:24.779659] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.962 [2024-07-24 16:33:24.779684] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042f80 name raid_bdev1, state configuring 00:16:27.962 [2024-07-24 16:33:24.779759] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:27.962 [2024-07-24 16:33:24.779844] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:16:27.962 [2024-07-24 16:33:24.779857] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:27.962 [2024-07-24 16:33:24.780157] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:16:27.962 [2024-07-24 16:33:24.780371] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:16:27.962 [2024-07-24 16:33:24.780388] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:16:27.962 [2024-07-24 16:33:24.780604] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.962 pt1 00:16:27.962 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:16:27.962 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:27.962 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:27.962 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:27.962 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:27.962 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:27.962 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:27.962 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.962 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:16:27.962 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.962 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.962 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.962 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.220 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.220 "name": "raid_bdev1", 00:16:28.220 "uuid": "22061426-4518-4280-b24a-a4fa750d18d8", 00:16:28.220 "strip_size_kb": 0, 00:16:28.220 "state": "online", 00:16:28.220 "raid_level": "raid1", 00:16:28.220 "superblock": true, 00:16:28.220 "num_base_bdevs": 2, 00:16:28.220 "num_base_bdevs_discovered": 1, 00:16:28.220 "num_base_bdevs_operational": 1, 00:16:28.220 "base_bdevs_list": [ 00:16:28.220 { 00:16:28.220 "name": null, 00:16:28.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.220 "is_configured": false, 00:16:28.220 "data_offset": 2048, 00:16:28.220 "data_size": 63488 00:16:28.220 }, 00:16:28.220 { 00:16:28.220 "name": "pt2", 00:16:28.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.220 "is_configured": true, 00:16:28.220 "data_offset": 2048, 00:16:28.220 "data_size": 63488 00:16:28.220 } 00:16:28.220 ] 00:16:28.220 }' 00:16:28.220 16:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.220 16:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.787 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:28.787 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq 
-r '.[].base_bdevs_list[0].is_configured' 00:16:29.046 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:16:29.046 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:29.046 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:16:29.046 [2024-07-24 16:33:25.871806] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.046 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 22061426-4518-4280-b24a-a4fa750d18d8 '!=' 22061426-4518-4280-b24a-a4fa750d18d8 ']' 00:16:29.046 16:33:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 1619567 00:16:29.046 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 1619567 ']' 00:16:29.046 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 1619567 00:16:29.046 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:29.046 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:29.046 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1619567 00:16:29.305 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:29.305 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:29.306 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1619567' 00:16:29.306 killing process with pid 1619567 00:16:29.306 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 1619567 00:16:29.306 [2024-07-24 16:33:25.942876] bdev_raid.c:1373:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:16:29.306 [2024-07-24 16:33:25.942970] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.306 [2024-07-24 16:33:25.943025] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.306 16:33:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 1619567 00:16:29.306 [2024-07-24 16:33:25.943044] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:16:29.306 [2024-07-24 16:33:26.132228] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:31.213 16:33:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:16:31.213 00:16:31.213 real 0m16.271s 00:16:31.213 user 0m27.944s 00:16:31.213 sys 0m2.740s 00:16:31.213 16:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:31.213 16:33:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.213 ************************************ 00:16:31.213 END TEST raid_superblock_test 00:16:31.213 ************************************ 00:16:31.213 16:33:27 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:16:31.213 16:33:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:31.213 16:33:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:31.213 16:33:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:31.213 ************************************ 00:16:31.213 START TEST raid_read_error_test 00:16:31.213 ************************************ 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p 
/raidtest 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.lfWp16msXN 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1622535 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1622535 /var/tmp/spdk-raid.sock 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 1622535 ']' 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:31.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.213 16:33:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.213 [2024-07-24 16:33:27.978114] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:16:31.213 [2024-07-24 16:33:27.978258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622535 ] 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:02.3 cannot be used 
00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:31.472 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.472 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:31.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.473 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:31.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:31.473 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:31.473 [2024-07-24 16:33:28.205871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.732 [2024-07-24 16:33:28.485118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.991 [2024-07-24 16:33:28.824464] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.991 [2024-07-24 16:33:28.824505] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:32.250 16:33:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:32.250 16:33:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:32.250 16:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:32.250 16:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:32.509 BaseBdev1_malloc 00:16:32.509 16:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:32.769 true 00:16:32.769 16:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:33.028 [2024-07-24 16:33:29.731607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:33.028 [2024-07-24 16:33:29.731667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.028 [2024-07-24 16:33:29.731694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:16:33.028 [2024-07-24 16:33:29.731719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.028 [2024-07-24 16:33:29.734511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.028 [2024-07-24 16:33:29.734551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:33.028 BaseBdev1 00:16:33.028 16:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:33.028 16:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:33.288 BaseBdev2_malloc 00:16:33.288 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:33.547 true 00:16:33.547 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:33.807 [2024-07-24 16:33:30.412241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:16:33.807 [2024-07-24 16:33:30.412303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.807 [2024-07-24 16:33:30.412329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:16:33.807 [2024-07-24 16:33:30.412350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.807 [2024-07-24 16:33:30.415111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.807 [2024-07-24 16:33:30.415156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:33.807 BaseBdev2 00:16:33.807 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:33.807 [2024-07-24 16:33:30.624866] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.807 [2024-07-24 16:33:30.627241] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:33.807 [2024-07-24 16:33:30.627486] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:16:33.807 [2024-07-24 16:33:30.627509] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:33.807 [2024-07-24 16:33:30.627849] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:16:33.807 [2024-07-24 16:33:30.628117] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:16:33.807 [2024-07-24 16:33:30.628133] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:16:33.807 [2024-07-24 16:33:30.628354] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.807 16:33:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:33.807 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:33.807 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:33.807 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:33.807 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:33.807 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:33.807 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.807 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.807 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.807 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.807 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.807 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.067 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.067 "name": "raid_bdev1", 00:16:34.067 "uuid": "3668baf2-10db-47e4-894c-3fc245b05e6a", 00:16:34.067 "strip_size_kb": 0, 00:16:34.067 "state": "online", 00:16:34.067 "raid_level": "raid1", 00:16:34.067 "superblock": true, 00:16:34.067 "num_base_bdevs": 2, 00:16:34.067 "num_base_bdevs_discovered": 2, 00:16:34.067 "num_base_bdevs_operational": 2, 00:16:34.067 "base_bdevs_list": [ 00:16:34.067 { 00:16:34.067 "name": "BaseBdev1", 00:16:34.067 "uuid": "87609b9f-ca9d-5825-a58c-460e2d4530b1", 00:16:34.067 "is_configured": 
true, 00:16:34.067 "data_offset": 2048, 00:16:34.067 "data_size": 63488 00:16:34.067 }, 00:16:34.067 { 00:16:34.067 "name": "BaseBdev2", 00:16:34.067 "uuid": "4a3dd8e8-b2df-5159-b74f-295a277cc453", 00:16:34.067 "is_configured": true, 00:16:34.067 "data_offset": 2048, 00:16:34.067 "data_size": 63488 00:16:34.067 } 00:16:34.067 ] 00:16:34.067 }' 00:16:34.067 16:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.067 16:33:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.635 16:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:34.635 16:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:34.895 [2024-07-24 16:33:31.513338] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:16:35.833 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:35.833 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:35.833 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:35.833 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:16:35.833 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:16:35.833 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:35.833 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:35.834 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:16:35.834 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:35.834 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:35.834 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:35.834 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.834 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.834 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.834 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.834 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.834 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.093 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.093 "name": "raid_bdev1", 00:16:36.093 "uuid": "3668baf2-10db-47e4-894c-3fc245b05e6a", 00:16:36.093 "strip_size_kb": 0, 00:16:36.093 "state": "online", 00:16:36.093 "raid_level": "raid1", 00:16:36.093 "superblock": true, 00:16:36.093 "num_base_bdevs": 2, 00:16:36.093 "num_base_bdevs_discovered": 2, 00:16:36.093 "num_base_bdevs_operational": 2, 00:16:36.093 "base_bdevs_list": [ 00:16:36.093 { 00:16:36.093 "name": "BaseBdev1", 00:16:36.093 "uuid": "87609b9f-ca9d-5825-a58c-460e2d4530b1", 00:16:36.093 "is_configured": true, 00:16:36.093 "data_offset": 2048, 00:16:36.093 "data_size": 63488 00:16:36.093 }, 00:16:36.093 { 00:16:36.093 "name": "BaseBdev2", 00:16:36.093 "uuid": "4a3dd8e8-b2df-5159-b74f-295a277cc453", 00:16:36.093 "is_configured": true, 00:16:36.093 "data_offset": 2048, 00:16:36.093 "data_size": 63488 
00:16:36.093 } 00:16:36.093 ] 00:16:36.093 }' 00:16:36.093 16:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.093 16:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.659 16:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:36.917 [2024-07-24 16:33:33.594698] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:36.917 [2024-07-24 16:33:33.594746] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.917 [2024-07-24 16:33:33.597996] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.917 [2024-07-24 16:33:33.598050] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.917 [2024-07-24 16:33:33.598155] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.917 [2024-07-24 16:33:33.598178] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:16:36.917 0 00:16:36.917 16:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1622535 00:16:36.917 16:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 1622535 ']' 00:16:36.917 16:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 1622535 00:16:36.917 16:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:16:36.918 16:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:36.918 16:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1622535 00:16:36.918 16:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:16:36.918 16:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:36.918 16:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1622535' 00:16:36.918 killing process with pid 1622535 00:16:36.918 16:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 1622535 00:16:36.918 [2024-07-24 16:33:33.668065] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:36.918 16:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 1622535 00:16:36.918 [2024-07-24 16:33:33.767169] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:38.823 16:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.lfWp16msXN 00:16:38.823 16:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:38.823 16:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:38.823 16:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:16:38.823 16:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:16:38.824 16:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:38.824 16:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:38.824 16:33:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:38.824 00:16:38.824 real 0m7.722s 00:16:38.824 user 0m10.773s 00:16:38.824 sys 0m1.109s 00:16:38.824 16:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:38.824 16:33:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.824 ************************************ 00:16:38.824 END TEST raid_read_error_test 00:16:38.824 ************************************ 
00:16:38.824 16:33:35 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:16:38.824 16:33:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:38.824 16:33:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:38.824 16:33:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:38.824 ************************************ 00:16:38.824 START TEST raid_write_error_test 00:16:38.824 ************************************ 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local 
base_bdevs 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.QSLsSG0d2j 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1623953 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1623953 /var/tmp/spdk-raid.sock 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 1623953 ']' 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:38.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.824 16:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:39.084 [2024-07-24 16:33:35.767988] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:16:39.084 [2024-07-24 16:33:35.768110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1623953 ] 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:01.7 cannot be 
used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:39.084 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:39.084 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:39.084 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:39.343 [2024-07-24 16:33:35.993190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.603 [2024-07-24 16:33:36.254956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.862 [2024-07-24 16:33:36.593470] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:39.862 [2024-07-24 16:33:36.593509] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.121 16:33:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:40.121 16:33:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@864 -- # return 0 00:16:40.121 16:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:40.121 16:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:40.380 BaseBdev1_malloc 00:16:40.381 16:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:40.693 true 00:16:40.693 16:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:40.693 [2024-07-24 16:33:37.462423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:40.693 [2024-07-24 16:33:37.462482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.693 [2024-07-24 16:33:37.462507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:16:40.693 [2024-07-24 16:33:37.462528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.693 [2024-07-24 16:33:37.465304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.693 [2024-07-24 16:33:37.465342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:40.693 BaseBdev1 00:16:40.693 16:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:40.693 16:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:40.952 BaseBdev2_malloc 
00:16:40.952 16:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:41.211 true 00:16:41.211 16:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:41.211 [2024-07-24 16:33:38.047448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:41.211 [2024-07-24 16:33:38.047505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.211 [2024-07-24 16:33:38.047530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:16:41.211 [2024-07-24 16:33:38.047551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.211 [2024-07-24 16:33:38.050294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.211 [2024-07-24 16:33:38.050332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:41.211 BaseBdev2 00:16:41.211 16:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:41.471 [2024-07-24 16:33:38.260091] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.471 [2024-07-24 16:33:38.262468] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.471 [2024-07-24 16:33:38.262719] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040e80 00:16:41.471 [2024-07-24 16:33:38.262742] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:41.471 [2024-07-24 
16:33:38.263084] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:16:41.471 [2024-07-24 16:33:38.263357] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040e80 00:16:41.471 [2024-07-24 16:33:38.263372] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040e80 00:16:41.471 [2024-07-24 16:33:38.263594] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.471 16:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:41.471 16:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:41.471 16:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:41.471 16:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:41.471 16:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:41.471 16:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:41.471 16:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.471 16:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.471 16:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.471 16:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.471 16:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.471 16:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.731 16:33:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.731 "name": "raid_bdev1", 00:16:41.731 "uuid": "67aa2512-39b3-4868-863e-c7035762d235", 00:16:41.731 "strip_size_kb": 0, 00:16:41.731 "state": "online", 00:16:41.731 "raid_level": "raid1", 00:16:41.731 "superblock": true, 00:16:41.731 "num_base_bdevs": 2, 00:16:41.731 "num_base_bdevs_discovered": 2, 00:16:41.731 "num_base_bdevs_operational": 2, 00:16:41.731 "base_bdevs_list": [ 00:16:41.731 { 00:16:41.731 "name": "BaseBdev1", 00:16:41.731 "uuid": "3e9b7f98-a60e-5ee9-9fd2-95ef166d661a", 00:16:41.731 "is_configured": true, 00:16:41.731 "data_offset": 2048, 00:16:41.731 "data_size": 63488 00:16:41.731 }, 00:16:41.731 { 00:16:41.731 "name": "BaseBdev2", 00:16:41.731 "uuid": "a8169ce0-3738-5d0d-9834-5ca8e6fddd7e", 00:16:41.731 "is_configured": true, 00:16:41.731 "data_offset": 2048, 00:16:41.731 "data_size": 63488 00:16:41.731 } 00:16:41.731 ] 00:16:41.731 }' 00:16:41.731 16:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.731 16:33:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.298 16:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:42.298 16:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:42.556 [2024-07-24 16:33:39.172540] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:43.494 [2024-07-24 16:33:40.286835] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:43.494 [2024-07-24 16:33:40.286905] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:43.494 [2024-07-24 16:33:40.287112] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010710 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=1 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.494 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.494 16:33:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.753 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.753 "name": "raid_bdev1", 00:16:43.753 "uuid": "67aa2512-39b3-4868-863e-c7035762d235", 00:16:43.753 "strip_size_kb": 0, 00:16:43.753 "state": "online", 00:16:43.753 "raid_level": "raid1", 00:16:43.753 "superblock": true, 00:16:43.753 "num_base_bdevs": 2, 00:16:43.753 "num_base_bdevs_discovered": 1, 00:16:43.753 "num_base_bdevs_operational": 1, 00:16:43.753 "base_bdevs_list": [ 00:16:43.753 { 00:16:43.753 "name": null, 00:16:43.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.753 "is_configured": false, 00:16:43.753 "data_offset": 2048, 00:16:43.753 "data_size": 63488 00:16:43.753 }, 00:16:43.753 { 00:16:43.753 "name": "BaseBdev2", 00:16:43.753 "uuid": "a8169ce0-3738-5d0d-9834-5ca8e6fddd7e", 00:16:43.753 "is_configured": true, 00:16:43.753 "data_offset": 2048, 00:16:43.753 "data_size": 63488 00:16:43.753 } 00:16:43.753 ] 00:16:43.753 }' 00:16:43.753 16:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.753 16:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.323 16:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:44.583 [2024-07-24 16:33:41.322299] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:44.583 [2024-07-24 16:33:41.322340] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.583 [2024-07-24 16:33:41.325602] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.583 [2024-07-24 16:33:41.325643] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.583 [2024-07-24 16:33:41.325715] 
bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.583 [2024-07-24 16:33:41.325731] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state offline 00:16:44.583 0 00:16:44.583 16:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1623953 00:16:44.583 16:33:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 1623953 ']' 00:16:44.583 16:33:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 1623953 00:16:44.583 16:33:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:44.583 16:33:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:44.583 16:33:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1623953 00:16:44.583 16:33:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:44.583 16:33:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:44.583 16:33:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1623953' 00:16:44.583 killing process with pid 1623953 00:16:44.583 16:33:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 1623953 00:16:44.583 [2024-07-24 16:33:41.397523] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:44.583 16:33:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 1623953 00:16:44.842 [2024-07-24 16:33:41.503344] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.749 16:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.QSLsSG0d2j 00:16:46.749 16:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 
00:16:46.749 16:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:46.749 16:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:16:46.749 16:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:16:46.749 16:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:46.749 16:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:46.749 16:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:46.749 00:16:46.749 real 0m7.601s 00:16:46.749 user 0m10.525s 00:16:46.749 sys 0m1.102s 00:16:46.749 16:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:46.749 16:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.749 ************************************ 00:16:46.749 END TEST raid_write_error_test 00:16:46.749 ************************************ 00:16:46.749 16:33:43 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:16:46.749 16:33:43 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:16:46.749 16:33:43 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:46.749 16:33:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:46.749 16:33:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:46.749 16:33:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.749 ************************************ 00:16:46.749 START TEST raid_state_function_test 00:16:46.749 ************************************ 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 
00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:46.749 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1625362 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1625362' 00:16:46.750 Process raid pid: 1625362 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1625362 /var/tmp/spdk-raid.sock 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 1625362 ']' 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:46.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:46.750 16:33:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:46.750 [2024-07-24 16:33:43.441311] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:16:46.750 [2024-07-24 16:33:43.441424] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:01.0 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:01.1 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:01.2 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:01.3 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:01.4 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:01.5 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:01.6 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:01.7 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:02.0 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:02.1 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:02.2 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:02.3 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:02.4 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:02.5 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:02.6 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3d:02.7 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:01.0 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:01.1 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:01.2 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:01.3 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:01.4 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:01.5 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:01.6 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:01.7 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:02.0 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:02.1 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:02.2 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:02.3 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:02.4 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:02.5 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:02.6 cannot be used
00:16:46.750 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:46.750 EAL: Requested device 0000:3f:02.7 cannot be used
00:16:47.009 [2024-07-24 16:33:43.669308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:47.268 [2024-07-24 16:33:43.956994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:47.527 [2024-07-24 16:33:44.307822] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:47.527 [2024-07-24 16:33:44.307860] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:47.787 16:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:47.787 16:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:16:47.787 16:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:48.046 [2024-07-24 16:33:44.790118] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:48.046 [2024-07-24 16:33:44.790184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:48.046 [2024-07-24 16:33:44.790200] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:48.046 [2024-07-24 16:33:44.790217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:48.046 [2024-07-24 16:33:44.790228] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:48.046 [2024-07-24 16:33:44.790244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:48.046 16:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:16:48.046 16:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:48.046 16:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:48.046 16:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:16:48.046 16:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:48.046 16:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:16:48.046 16:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:48.046 16:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:48.046 16:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:48.046 16:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:48.046 16:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:48.046 16:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:48.306 16:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:48.306 "name": "Existed_Raid",
00:16:48.306 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:48.306 "strip_size_kb": 64,
00:16:48.306 "state": "configuring",
00:16:48.306 "raid_level": "raid0",
00:16:48.306 "superblock": false,
00:16:48.306 "num_base_bdevs": 3,
00:16:48.306 "num_base_bdevs_discovered": 0,
00:16:48.306 "num_base_bdevs_operational": 3,
00:16:48.306 "base_bdevs_list": [
00:16:48.306 {
00:16:48.306 "name": "BaseBdev1",
00:16:48.306 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:48.306 "is_configured": false,
00:16:48.306 "data_offset": 0,
00:16:48.306 "data_size": 0
00:16:48.306 },
00:16:48.306 {
00:16:48.306 "name": "BaseBdev2",
00:16:48.306 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:48.306 "is_configured": false,
00:16:48.306 "data_offset": 0,
00:16:48.306 "data_size": 0
00:16:48.306 },
00:16:48.306 {
00:16:48.306 "name": "BaseBdev3",
00:16:48.306 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:48.306 "is_configured": false,
00:16:48.306 "data_offset": 0,
00:16:48.306 "data_size": 0
00:16:48.306 }
00:16:48.306 ]
00:16:48.306 }'
00:16:48.306 16:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:48.306 16:33:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:48.874 16:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:49.134 [2024-07-24 16:33:45.808698] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:49.134 [2024-07-24 16:33:45.808740] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring
00:16:49.134 16:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:49.393 [2024-07-24 16:33:46.041387] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:16:49.393 [2024-07-24 16:33:46.041434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:16:49.393 [2024-07-24 16:33:46.041447] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:49.393 [2024-07-24 16:33:46.041467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:49.393 [2024-07-24 16:33:46.041478] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:49.393 [2024-07-24 16:33:46.041494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:49.393 16:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:16:49.652 [2024-07-24 16:33:46.319185] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:49.652 BaseBdev1
00:16:49.652 16:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1
00:16:49.652 16:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:16:49.652 16:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:49.652 16:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:16:49.652 16:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:49.652 16:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:49.652 16:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:49.910 16:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:16:50.169 [
00:16:50.169 {
00:16:50.169 "name": "BaseBdev1",
00:16:50.169 "aliases": [
00:16:50.169 "64909c8b-4b6f-4a34-82fc-497f15cb2af6"
00:16:50.169 ],
00:16:50.169 "product_name": "Malloc disk",
00:16:50.169 "block_size": 512,
00:16:50.169 "num_blocks": 65536,
00:16:50.169 "uuid": "64909c8b-4b6f-4a34-82fc-497f15cb2af6",
00:16:50.169 "assigned_rate_limits": {
00:16:50.169 "rw_ios_per_sec": 0,
00:16:50.169 "rw_mbytes_per_sec": 0,
00:16:50.169 "r_mbytes_per_sec": 0,
00:16:50.169 "w_mbytes_per_sec": 0
00:16:50.169 },
00:16:50.169 "claimed": true,
00:16:50.169 "claim_type": "exclusive_write",
00:16:50.169 "zoned": false,
00:16:50.169 "supported_io_types": {
00:16:50.169 "read": true,
00:16:50.169 "write": true,
00:16:50.169 "unmap": true,
00:16:50.169 "flush": true,
00:16:50.169 "reset": true,
00:16:50.169 "nvme_admin": false,
00:16:50.169 "nvme_io": false,
00:16:50.169 "nvme_io_md": false,
00:16:50.169 "write_zeroes": true,
00:16:50.169 "zcopy": true,
00:16:50.169 "get_zone_info": false,
00:16:50.169 "zone_management": false,
00:16:50.169 "zone_append": false,
00:16:50.169 "compare": false,
00:16:50.169 "compare_and_write": false,
00:16:50.169 "abort": true,
00:16:50.170 "seek_hole": false,
00:16:50.170 "seek_data": false,
00:16:50.170 "copy": true,
00:16:50.170 "nvme_iov_md": false
00:16:50.170 },
00:16:50.170 "memory_domains": [
00:16:50.170 {
00:16:50.170 "dma_device_id": "system",
00:16:50.170 "dma_device_type": 1
00:16:50.170 },
00:16:50.170 {
00:16:50.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:50.170 "dma_device_type": 2
00:16:50.170 }
00:16:50.170 ],
00:16:50.170 "driver_specific": {}
00:16:50.170 }
00:16:50.170 ]
00:16:50.170 16:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:16:50.170 16:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:16:50.170 16:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:50.170 16:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:50.170 16:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:16:50.170 16:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:50.170 16:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:16:50.170 16:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:50.170 16:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:50.170 16:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:50.170 16:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:50.170 16:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:50.170 16:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:50.170 16:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:50.170 "name": "Existed_Raid",
00:16:50.170 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:50.170 "strip_size_kb": 64,
00:16:50.170 "state": "configuring",
00:16:50.170 "raid_level": "raid0",
00:16:50.170 "superblock": false,
00:16:50.170 "num_base_bdevs": 3,
00:16:50.170 "num_base_bdevs_discovered": 1,
00:16:50.170 "num_base_bdevs_operational": 3,
00:16:50.170 "base_bdevs_list": [
00:16:50.170 {
00:16:50.170 "name": "BaseBdev1",
00:16:50.170 "uuid": "64909c8b-4b6f-4a34-82fc-497f15cb2af6",
00:16:50.170 "is_configured": true,
00:16:50.170 "data_offset": 0,
00:16:50.170 "data_size": 65536
00:16:50.170 },
00:16:50.170 {
00:16:50.170 "name": "BaseBdev2",
00:16:50.170 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:50.170 "is_configured": false,
00:16:50.170 "data_offset": 0,
00:16:50.170 "data_size": 0
00:16:50.170 },
00:16:50.170 {
00:16:50.170 "name": "BaseBdev3",
00:16:50.170 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:50.170 "is_configured": false,
00:16:50.170 "data_offset": 0,
00:16:50.170 "data_size": 0
00:16:50.170 }
00:16:50.170 ]
00:16:50.170 }'
00:16:50.170 16:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:50.170 16:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:51.107 16:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:16:51.107 [2024-07-24 16:33:47.811365] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:16:51.107 [2024-07-24 16:33:47.811419] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring
00:16:51.107 16:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:16:51.366 [2024-07-24 16:33:48.040063] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:16:51.366 [2024-07-24 16:33:48.042341] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:16:51.366 [2024-07-24 16:33:48.042383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:16:51.366 [2024-07-24 16:33:48.042397] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:16:51.366 [2024-07-24 16:33:48.042418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:16:51.366 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 ))
00:16:51.366 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:16:51.366 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:16:51.366 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:51.366 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:51.366 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:16:51.366 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:51.366 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:16:51.366 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:51.366 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:51.366 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:51.366 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:51.366 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:51.366 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:51.626 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:51.626 "name": "Existed_Raid",
00:16:51.626 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:51.626 "strip_size_kb": 64,
00:16:51.626 "state": "configuring",
00:16:51.626 "raid_level": "raid0",
00:16:51.626 "superblock": false,
00:16:51.626 "num_base_bdevs": 3,
00:16:51.626 "num_base_bdevs_discovered": 1,
00:16:51.626 "num_base_bdevs_operational": 3,
00:16:51.626 "base_bdevs_list": [
00:16:51.626 {
00:16:51.626 "name": "BaseBdev1",
00:16:51.626 "uuid": "64909c8b-4b6f-4a34-82fc-497f15cb2af6",
00:16:51.626 "is_configured": true,
00:16:51.626 "data_offset": 0,
00:16:51.626 "data_size": 65536
00:16:51.626 },
00:16:51.626 {
00:16:51.626 "name": "BaseBdev2",
00:16:51.626 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:51.626 "is_configured": false,
00:16:51.626 "data_offset": 0,
00:16:51.626 "data_size": 0
00:16:51.626 },
00:16:51.626 {
00:16:51.626 "name": "BaseBdev3",
00:16:51.626 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:51.626 "is_configured": false,
00:16:51.626 "data_offset": 0,
00:16:51.626 "data_size": 0
00:16:51.626 }
00:16:51.626 ]
00:16:51.626 }'
00:16:51.626 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:51.626 16:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:52.195 16:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:16:52.454 [2024-07-24 16:33:49.110020] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:16:52.454 BaseBdev2
00:16:52.454 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2
00:16:52.454 16:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:16:52.454 16:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:52.454 16:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:16:52.454 16:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:52.454 16:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:52.454 16:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:52.713 16:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:16:52.713 [
00:16:52.713 {
00:16:52.713 "name": "BaseBdev2",
00:16:52.713 "aliases": [
00:16:52.713 "2da9d634-b01a-46a4-a735-cb5a8e06c415"
00:16:52.713 ],
00:16:52.713 "product_name": "Malloc disk",
00:16:52.713 "block_size": 512,
00:16:52.713 "num_blocks": 65536,
00:16:52.713 "uuid": "2da9d634-b01a-46a4-a735-cb5a8e06c415",
00:16:52.713 "assigned_rate_limits": {
00:16:52.713 "rw_ios_per_sec": 0,
00:16:52.713 "rw_mbytes_per_sec": 0,
00:16:52.713 "r_mbytes_per_sec": 0,
00:16:52.714 "w_mbytes_per_sec": 0
00:16:52.714 },
00:16:52.714 "claimed": true,
00:16:52.714 "claim_type": "exclusive_write",
00:16:52.714 "zoned": false,
00:16:52.714 "supported_io_types": {
00:16:52.714 "read": true,
00:16:52.714 "write": true,
00:16:52.714 "unmap": true,
00:16:52.714 "flush": true,
00:16:52.714 "reset": true,
00:16:52.714 "nvme_admin": false,
00:16:52.714 "nvme_io": false,
00:16:52.714 "nvme_io_md": false,
00:16:52.714 "write_zeroes": true,
00:16:52.714 "zcopy": true,
00:16:52.714 "get_zone_info": false,
00:16:52.714 "zone_management": false,
00:16:52.714 "zone_append": false,
00:16:52.714 "compare": false,
00:16:52.714 "compare_and_write": false,
00:16:52.714 "abort": true,
00:16:52.714 "seek_hole": false,
00:16:52.714 "seek_data": false,
00:16:52.714 "copy": true,
00:16:52.714 "nvme_iov_md": false
00:16:52.714 },
00:16:52.714 "memory_domains": [
00:16:52.714 {
00:16:52.714 "dma_device_id": "system",
00:16:52.714 "dma_device_type": 1
00:16:52.714 },
00:16:52.714 {
00:16:52.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:52.714 "dma_device_type": 2
00:16:52.714 }
00:16:52.714 ],
00:16:52.714 "driver_specific": {}
00:16:52.714 }
00:16:52.714 ]
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:52.973 "name": "Existed_Raid",
00:16:52.973 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:52.973 "strip_size_kb": 64,
00:16:52.973 "state": "configuring",
00:16:52.973 "raid_level": "raid0",
00:16:52.973 "superblock": false,
00:16:52.973 "num_base_bdevs": 3,
00:16:52.973 "num_base_bdevs_discovered": 2,
00:16:52.973 "num_base_bdevs_operational": 3,
00:16:52.973 "base_bdevs_list": [
00:16:52.973 {
00:16:52.973 "name": "BaseBdev1",
00:16:52.973 "uuid": "64909c8b-4b6f-4a34-82fc-497f15cb2af6",
00:16:52.973 "is_configured": true,
00:16:52.973 "data_offset": 0,
00:16:52.973 "data_size": 65536
00:16:52.973 },
00:16:52.973 {
00:16:52.973 "name": "BaseBdev2",
00:16:52.973 "uuid": "2da9d634-b01a-46a4-a735-cb5a8e06c415",
00:16:52.973 "is_configured": true,
00:16:52.973 "data_offset": 0,
00:16:52.973 "data_size": 65536
00:16:52.973 },
00:16:52.973 {
00:16:52.973 "name": "BaseBdev3",
00:16:52.973 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:52.973 "is_configured": false,
00:16:52.973 "data_offset": 0,
00:16:52.973 "data_size": 0
00:16:52.973 }
00:16:52.973 ]
00:16:52.973 }'
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:52.973 16:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:53.541 16:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:16:53.800 [2024-07-24 16:33:50.653328] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:16:53.800 [2024-07-24 16:33:50.653375] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80
00:16:53.800 [2024-07-24 16:33:50.653394] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:16:53.800 [2024-07-24 16:33:50.653729] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640
00:16:53.800 [2024-07-24 16:33:50.653965] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80
00:16:53.800 [2024-07-24 16:33:50.653980] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80
00:16:53.800 [2024-07-24 16:33:50.654303] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:53.800 BaseBdev3
00:16:54.060 16:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3
00:16:54.060 16:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:16:54.060 16:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:16:54.060 16:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:16:54.060 16:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:16:54.060 16:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:16:54.060 16:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:16:54.060 16:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:16:54.392 [
00:16:54.392 {
00:16:54.392 "name": "BaseBdev3",
00:16:54.392 "aliases": [
00:16:54.392 "9c872d78-cbc5-47fe-a555-557d5aa73fb3"
00:16:54.392 ],
00:16:54.392 "product_name": "Malloc disk",
00:16:54.392 "block_size": 512,
00:16:54.392 "num_blocks": 65536,
00:16:54.392 "uuid": "9c872d78-cbc5-47fe-a555-557d5aa73fb3",
00:16:54.392 "assigned_rate_limits": {
00:16:54.392 "rw_ios_per_sec": 0,
00:16:54.392 "rw_mbytes_per_sec": 0,
00:16:54.392 "r_mbytes_per_sec": 0,
00:16:54.392 "w_mbytes_per_sec": 0
00:16:54.392 },
00:16:54.392 "claimed": true,
00:16:54.392 "claim_type": "exclusive_write",
00:16:54.392 "zoned": false,
00:16:54.392 "supported_io_types": {
00:16:54.392 "read": true,
00:16:54.392 "write": true,
00:16:54.392 "unmap": true,
00:16:54.392 "flush": true,
00:16:54.392 "reset": true,
00:16:54.392 "nvme_admin": false,
00:16:54.392 "nvme_io": false,
00:16:54.392 "nvme_io_md": false,
00:16:54.392 "write_zeroes": true,
00:16:54.392 "zcopy": true,
00:16:54.392 "get_zone_info": false,
00:16:54.392 "zone_management": false,
00:16:54.392 "zone_append": false,
00:16:54.392 "compare": false,
00:16:54.392 "compare_and_write": false,
00:16:54.392 "abort": true,
00:16:54.392 "seek_hole": false,
00:16:54.392 "seek_data": false,
00:16:54.392 "copy": true,
00:16:54.392 "nvme_iov_md": false
00:16:54.392 },
00:16:54.392 "memory_domains": [
00:16:54.392 {
00:16:54.392 "dma_device_id": "system",
00:16:54.393 "dma_device_type": 1
00:16:54.393 },
00:16:54.393 {
00:16:54.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:16:54.393 "dma_device_type": 2
00:16:54.393 }
00:16:54.393 ],
00:16:54.393 "driver_specific": {}
00:16:54.393 }
00:16:54.393 ]
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:16:54.393 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:16:54.652 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:16:54.652 "name": "Existed_Raid",
00:16:54.652 "uuid": "0737be23-0eff-4fcb-8f10-a90b4f413f69",
00:16:54.652 "strip_size_kb": 64,
00:16:54.652 "state": "online",
00:16:54.652 "raid_level": "raid0",
00:16:54.652 "superblock": false,
00:16:54.652 "num_base_bdevs": 3,
00:16:54.652 "num_base_bdevs_discovered": 3,
00:16:54.652 "num_base_bdevs_operational": 3,
00:16:54.652 "base_bdevs_list": [
00:16:54.652 {
00:16:54.652 "name": "BaseBdev1",
00:16:54.652 "uuid": "64909c8b-4b6f-4a34-82fc-497f15cb2af6",
00:16:54.652 "is_configured": true,
00:16:54.652 "data_offset": 0,
00:16:54.652 "data_size": 65536
00:16:54.652 },
00:16:54.652 {
00:16:54.652 "name": "BaseBdev2",
00:16:54.652 "uuid": "2da9d634-b01a-46a4-a735-cb5a8e06c415",
00:16:54.652 "is_configured": true,
00:16:54.652 "data_offset": 0,
00:16:54.652 "data_size": 65536
00:16:54.652 },
00:16:54.652 {
00:16:54.652 "name": "BaseBdev3",
00:16:54.652 "uuid": "9c872d78-cbc5-47fe-a555-557d5aa73fb3",
00:16:54.652 "is_configured": true,
00:16:54.652 "data_offset": 0,
00:16:54.652 "data_size": 65536
00:16:54.652 }
00:16:54.652 ]
00:16:54.652 }'
00:16:54.652 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:16:54.652 16:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:16:55.221 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid
00:16:55.221 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:16:55.221 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:16:55.221 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:16:55.221 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:16:55.221 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name
00:16:55.221 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:16:55.221 16:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:16:55.480 [2024-07-24 16:33:52.137787] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:16:55.480 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:16:55.480 "name": "Existed_Raid",
00:16:55.480 "aliases": [
00:16:55.480 "0737be23-0eff-4fcb-8f10-a90b4f413f69"
00:16:55.480 ],
00:16:55.480 "product_name": "Raid Volume",
00:16:55.480 "block_size": 512,
00:16:55.480 "num_blocks": 196608,
00:16:55.480 "uuid": "0737be23-0eff-4fcb-8f10-a90b4f413f69",
00:16:55.480 "assigned_rate_limits": {
00:16:55.480 "rw_ios_per_sec": 0,
00:16:55.480 "rw_mbytes_per_sec": 0,
00:16:55.480 "r_mbytes_per_sec": 0,
00:16:55.480 "w_mbytes_per_sec": 0
00:16:55.480 },
00:16:55.480 "claimed": false,
00:16:55.480 "zoned": false,
00:16:55.480 "supported_io_types": {
00:16:55.480 "read": true,
00:16:55.480 "write": true,
00:16:55.480 "unmap": true,
00:16:55.480 "flush": true,
00:16:55.480 "reset": true,
00:16:55.480 "nvme_admin": false,
00:16:55.480 "nvme_io": false,
00:16:55.480 "nvme_io_md": false,
00:16:55.480 "write_zeroes": true,
00:16:55.480 "zcopy": false,
00:16:55.480 "get_zone_info": false,
00:16:55.480 "zone_management": false,
00:16:55.480 "zone_append": false,
00:16:55.480 "compare": false,
00:16:55.480 "compare_and_write": false,
00:16:55.480 "abort": false,
00:16:55.480 "seek_hole": false,
00:16:55.480 "seek_data": false,
00:16:55.480 "copy": false,
00:16:55.480 "nvme_iov_md": false
00:16:55.480 },
00:16:55.480 "memory_domains": [
00:16:55.480 {
00:16:55.480 "dma_device_id": "system",
"dma_device_type": 1 00:16:55.480 }, 00:16:55.480 { 00:16:55.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.480 "dma_device_type": 2 00:16:55.480 }, 00:16:55.480 { 00:16:55.480 "dma_device_id": "system", 00:16:55.480 "dma_device_type": 1 00:16:55.480 }, 00:16:55.480 { 00:16:55.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.480 "dma_device_type": 2 00:16:55.480 }, 00:16:55.480 { 00:16:55.480 "dma_device_id": "system", 00:16:55.480 "dma_device_type": 1 00:16:55.480 }, 00:16:55.480 { 00:16:55.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.480 "dma_device_type": 2 00:16:55.480 } 00:16:55.480 ], 00:16:55.480 "driver_specific": { 00:16:55.480 "raid": { 00:16:55.480 "uuid": "0737be23-0eff-4fcb-8f10-a90b4f413f69", 00:16:55.480 "strip_size_kb": 64, 00:16:55.481 "state": "online", 00:16:55.481 "raid_level": "raid0", 00:16:55.481 "superblock": false, 00:16:55.481 "num_base_bdevs": 3, 00:16:55.481 "num_base_bdevs_discovered": 3, 00:16:55.481 "num_base_bdevs_operational": 3, 00:16:55.481 "base_bdevs_list": [ 00:16:55.481 { 00:16:55.481 "name": "BaseBdev1", 00:16:55.481 "uuid": "64909c8b-4b6f-4a34-82fc-497f15cb2af6", 00:16:55.481 "is_configured": true, 00:16:55.481 "data_offset": 0, 00:16:55.481 "data_size": 65536 00:16:55.481 }, 00:16:55.481 { 00:16:55.481 "name": "BaseBdev2", 00:16:55.481 "uuid": "2da9d634-b01a-46a4-a735-cb5a8e06c415", 00:16:55.481 "is_configured": true, 00:16:55.481 "data_offset": 0, 00:16:55.481 "data_size": 65536 00:16:55.481 }, 00:16:55.481 { 00:16:55.481 "name": "BaseBdev3", 00:16:55.481 "uuid": "9c872d78-cbc5-47fe-a555-557d5aa73fb3", 00:16:55.481 "is_configured": true, 00:16:55.481 "data_offset": 0, 00:16:55.481 "data_size": 65536 00:16:55.481 } 00:16:55.481 ] 00:16:55.481 } 00:16:55.481 } 00:16:55.481 }' 00:16:55.481 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:55.481 16:33:52 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:55.481 BaseBdev2 00:16:55.481 BaseBdev3' 00:16:55.481 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:55.481 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:55.481 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:55.740 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:55.740 "name": "BaseBdev1", 00:16:55.740 "aliases": [ 00:16:55.740 "64909c8b-4b6f-4a34-82fc-497f15cb2af6" 00:16:55.740 ], 00:16:55.740 "product_name": "Malloc disk", 00:16:55.740 "block_size": 512, 00:16:55.740 "num_blocks": 65536, 00:16:55.740 "uuid": "64909c8b-4b6f-4a34-82fc-497f15cb2af6", 00:16:55.740 "assigned_rate_limits": { 00:16:55.740 "rw_ios_per_sec": 0, 00:16:55.740 "rw_mbytes_per_sec": 0, 00:16:55.740 "r_mbytes_per_sec": 0, 00:16:55.740 "w_mbytes_per_sec": 0 00:16:55.740 }, 00:16:55.740 "claimed": true, 00:16:55.740 "claim_type": "exclusive_write", 00:16:55.740 "zoned": false, 00:16:55.740 "supported_io_types": { 00:16:55.740 "read": true, 00:16:55.740 "write": true, 00:16:55.740 "unmap": true, 00:16:55.740 "flush": true, 00:16:55.740 "reset": true, 00:16:55.740 "nvme_admin": false, 00:16:55.740 "nvme_io": false, 00:16:55.740 "nvme_io_md": false, 00:16:55.740 "write_zeroes": true, 00:16:55.740 "zcopy": true, 00:16:55.740 "get_zone_info": false, 00:16:55.740 "zone_management": false, 00:16:55.740 "zone_append": false, 00:16:55.740 "compare": false, 00:16:55.740 "compare_and_write": false, 00:16:55.740 "abort": true, 00:16:55.740 "seek_hole": false, 00:16:55.740 "seek_data": false, 00:16:55.741 "copy": true, 00:16:55.741 "nvme_iov_md": false 00:16:55.741 }, 00:16:55.741 "memory_domains": [ 00:16:55.741 { 00:16:55.741 "dma_device_id": 
"system", 00:16:55.741 "dma_device_type": 1 00:16:55.741 }, 00:16:55.741 { 00:16:55.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.741 "dma_device_type": 2 00:16:55.741 } 00:16:55.741 ], 00:16:55.741 "driver_specific": {} 00:16:55.741 }' 00:16:55.741 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.741 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.741 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:55.741 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.741 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.741 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:55.741 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:56.000 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:56.000 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:56.000 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:56.000 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:56.000 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:56.000 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:56.000 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:56.000 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:56.259 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:16:56.259 "name": "BaseBdev2", 00:16:56.259 "aliases": [ 00:16:56.259 "2da9d634-b01a-46a4-a735-cb5a8e06c415" 00:16:56.259 ], 00:16:56.259 "product_name": "Malloc disk", 00:16:56.259 "block_size": 512, 00:16:56.259 "num_blocks": 65536, 00:16:56.259 "uuid": "2da9d634-b01a-46a4-a735-cb5a8e06c415", 00:16:56.259 "assigned_rate_limits": { 00:16:56.259 "rw_ios_per_sec": 0, 00:16:56.259 "rw_mbytes_per_sec": 0, 00:16:56.259 "r_mbytes_per_sec": 0, 00:16:56.259 "w_mbytes_per_sec": 0 00:16:56.259 }, 00:16:56.259 "claimed": true, 00:16:56.259 "claim_type": "exclusive_write", 00:16:56.259 "zoned": false, 00:16:56.259 "supported_io_types": { 00:16:56.259 "read": true, 00:16:56.259 "write": true, 00:16:56.259 "unmap": true, 00:16:56.259 "flush": true, 00:16:56.259 "reset": true, 00:16:56.259 "nvme_admin": false, 00:16:56.259 "nvme_io": false, 00:16:56.259 "nvme_io_md": false, 00:16:56.259 "write_zeroes": true, 00:16:56.259 "zcopy": true, 00:16:56.259 "get_zone_info": false, 00:16:56.259 "zone_management": false, 00:16:56.259 "zone_append": false, 00:16:56.259 "compare": false, 00:16:56.259 "compare_and_write": false, 00:16:56.259 "abort": true, 00:16:56.259 "seek_hole": false, 00:16:56.259 "seek_data": false, 00:16:56.259 "copy": true, 00:16:56.259 "nvme_iov_md": false 00:16:56.259 }, 00:16:56.259 "memory_domains": [ 00:16:56.259 { 00:16:56.259 "dma_device_id": "system", 00:16:56.259 "dma_device_type": 1 00:16:56.259 }, 00:16:56.259 { 00:16:56.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.259 "dma_device_type": 2 00:16:56.259 } 00:16:56.259 ], 00:16:56.259 "driver_specific": {} 00:16:56.259 }' 00:16:56.259 16:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:56.259 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:56.259 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:56.259 16:33:53 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:56.259 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:56.518 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:56.518 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:56.518 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:56.518 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:56.518 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:56.518 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:56.518 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:56.518 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:56.518 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:56.518 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:56.776 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:56.776 "name": "BaseBdev3", 00:16:56.776 "aliases": [ 00:16:56.776 "9c872d78-cbc5-47fe-a555-557d5aa73fb3" 00:16:56.776 ], 00:16:56.776 "product_name": "Malloc disk", 00:16:56.776 "block_size": 512, 00:16:56.776 "num_blocks": 65536, 00:16:56.776 "uuid": "9c872d78-cbc5-47fe-a555-557d5aa73fb3", 00:16:56.776 "assigned_rate_limits": { 00:16:56.776 "rw_ios_per_sec": 0, 00:16:56.776 "rw_mbytes_per_sec": 0, 00:16:56.776 "r_mbytes_per_sec": 0, 00:16:56.776 "w_mbytes_per_sec": 0 00:16:56.776 }, 00:16:56.776 "claimed": true, 00:16:56.776 "claim_type": "exclusive_write", 00:16:56.776 "zoned": false, 
00:16:56.776 "supported_io_types": { 00:16:56.776 "read": true, 00:16:56.776 "write": true, 00:16:56.776 "unmap": true, 00:16:56.776 "flush": true, 00:16:56.776 "reset": true, 00:16:56.776 "nvme_admin": false, 00:16:56.776 "nvme_io": false, 00:16:56.776 "nvme_io_md": false, 00:16:56.776 "write_zeroes": true, 00:16:56.776 "zcopy": true, 00:16:56.776 "get_zone_info": false, 00:16:56.776 "zone_management": false, 00:16:56.776 "zone_append": false, 00:16:56.776 "compare": false, 00:16:56.776 "compare_and_write": false, 00:16:56.776 "abort": true, 00:16:56.776 "seek_hole": false, 00:16:56.776 "seek_data": false, 00:16:56.776 "copy": true, 00:16:56.776 "nvme_iov_md": false 00:16:56.776 }, 00:16:56.776 "memory_domains": [ 00:16:56.776 { 00:16:56.776 "dma_device_id": "system", 00:16:56.776 "dma_device_type": 1 00:16:56.776 }, 00:16:56.776 { 00:16:56.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.776 "dma_device_type": 2 00:16:56.776 } 00:16:56.776 ], 00:16:56.776 "driver_specific": {} 00:16:56.776 }' 00:16:56.776 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:56.776 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:56.776 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:56.776 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:57.034 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:57.034 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:57.034 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:57.034 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:57.034 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:57.034 16:33:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:57.034 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:57.034 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:57.034 16:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:57.293 [2024-07-24 16:33:54.086765] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:57.293 [2024-07-24 16:33:54.086796] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.293 [2024-07-24 16:33:54.086856] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.293 16:33:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.293 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.552 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.552 "name": "Existed_Raid", 00:16:57.552 "uuid": "0737be23-0eff-4fcb-8f10-a90b4f413f69", 00:16:57.552 "strip_size_kb": 64, 00:16:57.552 "state": "offline", 00:16:57.552 "raid_level": "raid0", 00:16:57.552 "superblock": false, 00:16:57.552 "num_base_bdevs": 3, 00:16:57.552 "num_base_bdevs_discovered": 2, 00:16:57.552 "num_base_bdevs_operational": 2, 00:16:57.552 "base_bdevs_list": [ 00:16:57.552 { 00:16:57.552 "name": null, 00:16:57.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.552 "is_configured": false, 00:16:57.552 "data_offset": 0, 00:16:57.552 "data_size": 65536 00:16:57.552 }, 00:16:57.552 { 00:16:57.552 "name": "BaseBdev2", 00:16:57.552 "uuid": "2da9d634-b01a-46a4-a735-cb5a8e06c415", 00:16:57.552 "is_configured": true, 00:16:57.552 "data_offset": 0, 00:16:57.552 "data_size": 65536 00:16:57.552 }, 00:16:57.552 { 00:16:57.552 "name": "BaseBdev3", 00:16:57.552 "uuid": "9c872d78-cbc5-47fe-a555-557d5aa73fb3", 00:16:57.552 "is_configured": true, 00:16:57.552 "data_offset": 0, 00:16:57.552 
"data_size": 65536 00:16:57.552 } 00:16:57.552 ] 00:16:57.552 }' 00:16:57.552 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.552 16:33:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.120 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:58.120 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:58.120 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.120 16:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:58.379 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:58.379 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:58.379 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:58.638 [2024-07-24 16:33:55.389722] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:58.897 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:58.897 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:58.897 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.897 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:59.157 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:59.157 16:33:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:59.157 16:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:59.157 [2024-07-24 16:33:55.977626] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:59.157 [2024-07-24 16:33:55.977682] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:16:59.416 16:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:59.416 16:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:59.416 16:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.416 16:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:59.675 16:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:59.675 16:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:59.675 16:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:59.675 16:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:59.675 16:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:59.675 16:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:59.935 BaseBdev2 00:16:59.935 16:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:59.935 16:33:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:59.935 16:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:59.935 16:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:59.935 16:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:59.935 16:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:59.935 16:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:00.194 16:33:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:00.453 [ 00:17:00.453 { 00:17:00.453 "name": "BaseBdev2", 00:17:00.453 "aliases": [ 00:17:00.453 "4887c481-f8c5-484e-94f9-4f35c8fcff90" 00:17:00.453 ], 00:17:00.453 "product_name": "Malloc disk", 00:17:00.453 "block_size": 512, 00:17:00.453 "num_blocks": 65536, 00:17:00.453 "uuid": "4887c481-f8c5-484e-94f9-4f35c8fcff90", 00:17:00.453 "assigned_rate_limits": { 00:17:00.453 "rw_ios_per_sec": 0, 00:17:00.453 "rw_mbytes_per_sec": 0, 00:17:00.453 "r_mbytes_per_sec": 0, 00:17:00.453 "w_mbytes_per_sec": 0 00:17:00.453 }, 00:17:00.453 "claimed": false, 00:17:00.453 "zoned": false, 00:17:00.453 "supported_io_types": { 00:17:00.453 "read": true, 00:17:00.453 "write": true, 00:17:00.453 "unmap": true, 00:17:00.453 "flush": true, 00:17:00.453 "reset": true, 00:17:00.453 "nvme_admin": false, 00:17:00.453 "nvme_io": false, 00:17:00.453 "nvme_io_md": false, 00:17:00.453 "write_zeroes": true, 00:17:00.453 "zcopy": true, 00:17:00.453 "get_zone_info": false, 00:17:00.453 "zone_management": false, 00:17:00.453 "zone_append": false, 
00:17:00.453 "compare": false, 00:17:00.453 "compare_and_write": false, 00:17:00.453 "abort": true, 00:17:00.453 "seek_hole": false, 00:17:00.453 "seek_data": false, 00:17:00.453 "copy": true, 00:17:00.453 "nvme_iov_md": false 00:17:00.453 }, 00:17:00.453 "memory_domains": [ 00:17:00.453 { 00:17:00.453 "dma_device_id": "system", 00:17:00.453 "dma_device_type": 1 00:17:00.453 }, 00:17:00.453 { 00:17:00.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.454 "dma_device_type": 2 00:17:00.454 } 00:17:00.454 ], 00:17:00.454 "driver_specific": {} 00:17:00.454 } 00:17:00.454 ] 00:17:00.454 16:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:00.454 16:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:00.454 16:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:00.454 16:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:00.712 BaseBdev3 00:17:00.712 16:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:00.712 16:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:00.712 16:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:00.712 16:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:00.712 16:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:00.712 16:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:00.712 16:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:17:00.972 16:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:00.972 [ 00:17:00.972 { 00:17:00.972 "name": "BaseBdev3", 00:17:00.972 "aliases": [ 00:17:00.972 "5c626306-44e2-4d9a-a84a-40f8655bab50" 00:17:00.972 ], 00:17:00.972 "product_name": "Malloc disk", 00:17:00.972 "block_size": 512, 00:17:00.972 "num_blocks": 65536, 00:17:00.972 "uuid": "5c626306-44e2-4d9a-a84a-40f8655bab50", 00:17:00.972 "assigned_rate_limits": { 00:17:00.972 "rw_ios_per_sec": 0, 00:17:00.972 "rw_mbytes_per_sec": 0, 00:17:00.972 "r_mbytes_per_sec": 0, 00:17:00.972 "w_mbytes_per_sec": 0 00:17:00.972 }, 00:17:00.972 "claimed": false, 00:17:00.972 "zoned": false, 00:17:00.972 "supported_io_types": { 00:17:00.972 "read": true, 00:17:00.972 "write": true, 00:17:00.972 "unmap": true, 00:17:00.972 "flush": true, 00:17:00.972 "reset": true, 00:17:00.972 "nvme_admin": false, 00:17:00.972 "nvme_io": false, 00:17:00.972 "nvme_io_md": false, 00:17:00.972 "write_zeroes": true, 00:17:00.972 "zcopy": true, 00:17:00.972 "get_zone_info": false, 00:17:00.972 "zone_management": false, 00:17:00.972 "zone_append": false, 00:17:00.972 "compare": false, 00:17:00.972 "compare_and_write": false, 00:17:00.972 "abort": true, 00:17:00.972 "seek_hole": false, 00:17:00.972 "seek_data": false, 00:17:00.972 "copy": true, 00:17:00.972 "nvme_iov_md": false 00:17:00.972 }, 00:17:00.972 "memory_domains": [ 00:17:00.972 { 00:17:00.972 "dma_device_id": "system", 00:17:00.972 "dma_device_type": 1 00:17:00.972 }, 00:17:00.972 { 00:17:00.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.972 "dma_device_type": 2 00:17:00.972 } 00:17:00.972 ], 00:17:00.972 "driver_specific": {} 00:17:00.972 } 00:17:00.972 ] 00:17:00.972 16:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:00.972 16:33:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:00.972 16:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:00.972 16:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:01.231 [2024-07-24 16:33:58.029202] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:01.231 [2024-07-24 16:33:58.029251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:01.232 [2024-07-24 16:33:58.029282] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.232 [2024-07-24 16:33:58.031564] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.232 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:01.232 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:01.232 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:01.232 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:01.232 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:01.232 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:01.232 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:01.232 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:01.232 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:17:01.232 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:01.232 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.232 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.491 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:01.491 "name": "Existed_Raid", 00:17:01.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.491 "strip_size_kb": 64, 00:17:01.491 "state": "configuring", 00:17:01.491 "raid_level": "raid0", 00:17:01.491 "superblock": false, 00:17:01.491 "num_base_bdevs": 3, 00:17:01.491 "num_base_bdevs_discovered": 2, 00:17:01.491 "num_base_bdevs_operational": 3, 00:17:01.491 "base_bdevs_list": [ 00:17:01.491 { 00:17:01.491 "name": "BaseBdev1", 00:17:01.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.491 "is_configured": false, 00:17:01.491 "data_offset": 0, 00:17:01.491 "data_size": 0 00:17:01.491 }, 00:17:01.491 { 00:17:01.491 "name": "BaseBdev2", 00:17:01.491 "uuid": "4887c481-f8c5-484e-94f9-4f35c8fcff90", 00:17:01.491 "is_configured": true, 00:17:01.491 "data_offset": 0, 00:17:01.491 "data_size": 65536 00:17:01.491 }, 00:17:01.491 { 00:17:01.491 "name": "BaseBdev3", 00:17:01.491 "uuid": "5c626306-44e2-4d9a-a84a-40f8655bab50", 00:17:01.491 "is_configured": true, 00:17:01.491 "data_offset": 0, 00:17:01.491 "data_size": 65536 00:17:01.491 } 00:17:01.491 ] 00:17:01.491 }' 00:17:01.491 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:01.491 16:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.060 16:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:02.319 [2024-07-24 16:33:59.039905] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:02.319 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:02.319 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:02.319 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:02.319 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:02.319 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:02.319 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:02.319 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.319 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.319 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.319 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.319 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.319 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.578 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:02.578 "name": "Existed_Raid", 00:17:02.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.578 "strip_size_kb": 64, 00:17:02.578 "state": "configuring", 
00:17:02.578 "raid_level": "raid0", 00:17:02.578 "superblock": false, 00:17:02.578 "num_base_bdevs": 3, 00:17:02.578 "num_base_bdevs_discovered": 1, 00:17:02.578 "num_base_bdevs_operational": 3, 00:17:02.578 "base_bdevs_list": [ 00:17:02.578 { 00:17:02.578 "name": "BaseBdev1", 00:17:02.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.578 "is_configured": false, 00:17:02.578 "data_offset": 0, 00:17:02.578 "data_size": 0 00:17:02.578 }, 00:17:02.578 { 00:17:02.578 "name": null, 00:17:02.578 "uuid": "4887c481-f8c5-484e-94f9-4f35c8fcff90", 00:17:02.578 "is_configured": false, 00:17:02.578 "data_offset": 0, 00:17:02.578 "data_size": 65536 00:17:02.578 }, 00:17:02.578 { 00:17:02.578 "name": "BaseBdev3", 00:17:02.578 "uuid": "5c626306-44e2-4d9a-a84a-40f8655bab50", 00:17:02.578 "is_configured": true, 00:17:02.578 "data_offset": 0, 00:17:02.578 "data_size": 65536 00:17:02.578 } 00:17:02.578 ] 00:17:02.578 }' 00:17:02.578 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.578 16:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.146 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.146 16:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:03.404 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:03.404 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:03.662 [2024-07-24 16:34:00.293951] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.662 BaseBdev1 00:17:03.662 16:34:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:03.662 16:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:03.662 16:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:03.662 16:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:03.663 16:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:03.663 16:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:03.663 16:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:03.921 16:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:03.921 [ 00:17:03.921 { 00:17:03.921 "name": "BaseBdev1", 00:17:03.921 "aliases": [ 00:17:03.921 "120ee391-2f87-41d6-9fb6-8b8d2b8a0879" 00:17:03.921 ], 00:17:03.921 "product_name": "Malloc disk", 00:17:03.921 "block_size": 512, 00:17:03.921 "num_blocks": 65536, 00:17:03.921 "uuid": "120ee391-2f87-41d6-9fb6-8b8d2b8a0879", 00:17:03.921 "assigned_rate_limits": { 00:17:03.921 "rw_ios_per_sec": 0, 00:17:03.921 "rw_mbytes_per_sec": 0, 00:17:03.921 "r_mbytes_per_sec": 0, 00:17:03.921 "w_mbytes_per_sec": 0 00:17:03.921 }, 00:17:03.921 "claimed": true, 00:17:03.921 "claim_type": "exclusive_write", 00:17:03.921 "zoned": false, 00:17:03.921 "supported_io_types": { 00:17:03.921 "read": true, 00:17:03.921 "write": true, 00:17:03.921 "unmap": true, 00:17:03.921 "flush": true, 00:17:03.921 "reset": true, 00:17:03.921 "nvme_admin": false, 00:17:03.921 "nvme_io": false, 00:17:03.921 "nvme_io_md": false, 00:17:03.921 "write_zeroes": true, 00:17:03.921 "zcopy": 
true, 00:17:03.921 "get_zone_info": false, 00:17:03.921 "zone_management": false, 00:17:03.921 "zone_append": false, 00:17:03.921 "compare": false, 00:17:03.921 "compare_and_write": false, 00:17:03.921 "abort": true, 00:17:03.921 "seek_hole": false, 00:17:03.921 "seek_data": false, 00:17:03.921 "copy": true, 00:17:03.921 "nvme_iov_md": false 00:17:03.921 }, 00:17:03.921 "memory_domains": [ 00:17:03.921 { 00:17:03.921 "dma_device_id": "system", 00:17:03.921 "dma_device_type": 1 00:17:03.921 }, 00:17:03.921 { 00:17:03.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.921 "dma_device_type": 2 00:17:03.921 } 00:17:03.921 ], 00:17:03.921 "driver_specific": {} 00:17:03.921 } 00:17:03.921 ] 00:17:03.921 16:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:03.921 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:03.921 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:03.921 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:03.921 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:03.921 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:03.921 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:03.921 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.921 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.921 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.921 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.921 16:34:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.921 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.180 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:04.180 "name": "Existed_Raid", 00:17:04.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.180 "strip_size_kb": 64, 00:17:04.180 "state": "configuring", 00:17:04.180 "raid_level": "raid0", 00:17:04.180 "superblock": false, 00:17:04.180 "num_base_bdevs": 3, 00:17:04.180 "num_base_bdevs_discovered": 2, 00:17:04.180 "num_base_bdevs_operational": 3, 00:17:04.180 "base_bdevs_list": [ 00:17:04.180 { 00:17:04.180 "name": "BaseBdev1", 00:17:04.180 "uuid": "120ee391-2f87-41d6-9fb6-8b8d2b8a0879", 00:17:04.180 "is_configured": true, 00:17:04.180 "data_offset": 0, 00:17:04.180 "data_size": 65536 00:17:04.180 }, 00:17:04.180 { 00:17:04.180 "name": null, 00:17:04.180 "uuid": "4887c481-f8c5-484e-94f9-4f35c8fcff90", 00:17:04.180 "is_configured": false, 00:17:04.180 "data_offset": 0, 00:17:04.180 "data_size": 65536 00:17:04.180 }, 00:17:04.180 { 00:17:04.180 "name": "BaseBdev3", 00:17:04.180 "uuid": "5c626306-44e2-4d9a-a84a-40f8655bab50", 00:17:04.180 "is_configured": true, 00:17:04.180 "data_offset": 0, 00:17:04.180 "data_size": 65536 00:17:04.180 } 00:17:04.180 ] 00:17:04.180 }' 00:17:04.180 16:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:04.180 16:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.745 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.745 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 
-- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:05.003 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:05.003 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:05.262 [2024-07-24 16:34:01.974567] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:05.262 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:05.262 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:05.262 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:05.262 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:05.262 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:05.262 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:05.262 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.262 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.262 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.262 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.262 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.262 16:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.520 16:34:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.520 "name": "Existed_Raid", 00:17:05.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.520 "strip_size_kb": 64, 00:17:05.520 "state": "configuring", 00:17:05.520 "raid_level": "raid0", 00:17:05.520 "superblock": false, 00:17:05.520 "num_base_bdevs": 3, 00:17:05.520 "num_base_bdevs_discovered": 1, 00:17:05.520 "num_base_bdevs_operational": 3, 00:17:05.520 "base_bdevs_list": [ 00:17:05.520 { 00:17:05.520 "name": "BaseBdev1", 00:17:05.520 "uuid": "120ee391-2f87-41d6-9fb6-8b8d2b8a0879", 00:17:05.520 "is_configured": true, 00:17:05.520 "data_offset": 0, 00:17:05.520 "data_size": 65536 00:17:05.520 }, 00:17:05.520 { 00:17:05.520 "name": null, 00:17:05.520 "uuid": "4887c481-f8c5-484e-94f9-4f35c8fcff90", 00:17:05.520 "is_configured": false, 00:17:05.520 "data_offset": 0, 00:17:05.520 "data_size": 65536 00:17:05.520 }, 00:17:05.520 { 00:17:05.520 "name": null, 00:17:05.520 "uuid": "5c626306-44e2-4d9a-a84a-40f8655bab50", 00:17:05.520 "is_configured": false, 00:17:05.520 "data_offset": 0, 00:17:05.520 "data_size": 65536 00:17:05.520 } 00:17:05.520 ] 00:17:05.520 }' 00:17:05.520 16:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.520 16:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.084 16:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.084 16:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:06.341 16:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:06.341 16:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:06.609 [2024-07-24 16:34:03.205932] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:06.609 "name": "Existed_Raid", 00:17:06.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.609 "strip_size_kb": 64, 00:17:06.609 "state": "configuring", 00:17:06.609 "raid_level": "raid0", 00:17:06.609 
"superblock": false, 00:17:06.609 "num_base_bdevs": 3, 00:17:06.609 "num_base_bdevs_discovered": 2, 00:17:06.609 "num_base_bdevs_operational": 3, 00:17:06.609 "base_bdevs_list": [ 00:17:06.609 { 00:17:06.609 "name": "BaseBdev1", 00:17:06.609 "uuid": "120ee391-2f87-41d6-9fb6-8b8d2b8a0879", 00:17:06.609 "is_configured": true, 00:17:06.609 "data_offset": 0, 00:17:06.609 "data_size": 65536 00:17:06.609 }, 00:17:06.609 { 00:17:06.609 "name": null, 00:17:06.609 "uuid": "4887c481-f8c5-484e-94f9-4f35c8fcff90", 00:17:06.609 "is_configured": false, 00:17:06.609 "data_offset": 0, 00:17:06.609 "data_size": 65536 00:17:06.609 }, 00:17:06.609 { 00:17:06.609 "name": "BaseBdev3", 00:17:06.609 "uuid": "5c626306-44e2-4d9a-a84a-40f8655bab50", 00:17:06.609 "is_configured": true, 00:17:06.609 "data_offset": 0, 00:17:06.609 "data_size": 65536 00:17:06.609 } 00:17:06.609 ] 00:17:06.609 }' 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:06.609 16:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.198 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.198 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:07.476 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:07.476 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:07.734 [2024-07-24 16:34:04.481416] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:07.991 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:07.991 
16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:07.991 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:07.991 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:07.991 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:07.991 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:07.991 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.991 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.992 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.992 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.992 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.992 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.250 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.250 "name": "Existed_Raid", 00:17:08.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.250 "strip_size_kb": 64, 00:17:08.250 "state": "configuring", 00:17:08.250 "raid_level": "raid0", 00:17:08.250 "superblock": false, 00:17:08.250 "num_base_bdevs": 3, 00:17:08.250 "num_base_bdevs_discovered": 1, 00:17:08.250 "num_base_bdevs_operational": 3, 00:17:08.250 "base_bdevs_list": [ 00:17:08.250 { 00:17:08.250 "name": null, 00:17:08.250 "uuid": "120ee391-2f87-41d6-9fb6-8b8d2b8a0879", 00:17:08.250 "is_configured": false, 00:17:08.250 
"data_offset": 0, 00:17:08.250 "data_size": 65536 00:17:08.250 }, 00:17:08.250 { 00:17:08.250 "name": null, 00:17:08.250 "uuid": "4887c481-f8c5-484e-94f9-4f35c8fcff90", 00:17:08.250 "is_configured": false, 00:17:08.250 "data_offset": 0, 00:17:08.250 "data_size": 65536 00:17:08.250 }, 00:17:08.250 { 00:17:08.250 "name": "BaseBdev3", 00:17:08.250 "uuid": "5c626306-44e2-4d9a-a84a-40f8655bab50", 00:17:08.250 "is_configured": true, 00:17:08.250 "data_offset": 0, 00:17:08.250 "data_size": 65536 00:17:08.250 } 00:17:08.250 ] 00:17:08.250 }' 00:17:08.250 16:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.250 16:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.815 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.815 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:08.815 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:08.815 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:09.072 [2024-07-24 16:34:05.875090] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.072 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:09.072 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:09.072 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:09.072 16:34:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:09.072 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:09.072 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:09.072 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.072 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.072 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.072 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.072 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.072 16:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.330 16:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.330 "name": "Existed_Raid", 00:17:09.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.330 "strip_size_kb": 64, 00:17:09.330 "state": "configuring", 00:17:09.330 "raid_level": "raid0", 00:17:09.330 "superblock": false, 00:17:09.330 "num_base_bdevs": 3, 00:17:09.330 "num_base_bdevs_discovered": 2, 00:17:09.330 "num_base_bdevs_operational": 3, 00:17:09.330 "base_bdevs_list": [ 00:17:09.330 { 00:17:09.330 "name": null, 00:17:09.330 "uuid": "120ee391-2f87-41d6-9fb6-8b8d2b8a0879", 00:17:09.330 "is_configured": false, 00:17:09.330 "data_offset": 0, 00:17:09.330 "data_size": 65536 00:17:09.330 }, 00:17:09.330 { 00:17:09.330 "name": "BaseBdev2", 00:17:09.330 "uuid": "4887c481-f8c5-484e-94f9-4f35c8fcff90", 00:17:09.330 "is_configured": true, 00:17:09.330 "data_offset": 0, 00:17:09.330 "data_size": 65536 00:17:09.330 }, 
00:17:09.330 { 00:17:09.330 "name": "BaseBdev3", 00:17:09.330 "uuid": "5c626306-44e2-4d9a-a84a-40f8655bab50", 00:17:09.330 "is_configured": true, 00:17:09.330 "data_offset": 0, 00:17:09.330 "data_size": 65536 00:17:09.330 } 00:17:09.330 ] 00:17:09.330 }' 00:17:09.330 16:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.330 16:34:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.895 16:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:09.895 16:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.152 16:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:10.152 16:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.152 16:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:10.410 16:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 120ee391-2f87-41d6-9fb6-8b8d2b8a0879 00:17:10.667 [2024-07-24 16:34:07.419660] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:10.667 [2024-07-24 16:34:07.419702] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:17:10.667 [2024-07-24 16:34:07.419718] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:10.667 [2024-07-24 16:34:07.420021] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:17:10.667 
[2024-07-24 16:34:07.420229] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:17:10.667 [2024-07-24 16:34:07.420244] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:17:10.667 [2024-07-24 16:34:07.420552] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.667 NewBaseBdev 00:17:10.667 16:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:10.667 16:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:10.667 16:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:10.667 16:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:10.667 16:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:10.667 16:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:10.667 16:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:10.931 16:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:11.191 [ 00:17:11.191 { 00:17:11.191 "name": "NewBaseBdev", 00:17:11.191 "aliases": [ 00:17:11.191 "120ee391-2f87-41d6-9fb6-8b8d2b8a0879" 00:17:11.191 ], 00:17:11.191 "product_name": "Malloc disk", 00:17:11.191 "block_size": 512, 00:17:11.191 "num_blocks": 65536, 00:17:11.191 "uuid": "120ee391-2f87-41d6-9fb6-8b8d2b8a0879", 00:17:11.191 "assigned_rate_limits": { 00:17:11.191 "rw_ios_per_sec": 0, 00:17:11.191 "rw_mbytes_per_sec": 0, 00:17:11.191 "r_mbytes_per_sec": 0, 
00:17:11.191 "w_mbytes_per_sec": 0 00:17:11.191 }, 00:17:11.191 "claimed": true, 00:17:11.191 "claim_type": "exclusive_write", 00:17:11.191 "zoned": false, 00:17:11.191 "supported_io_types": { 00:17:11.191 "read": true, 00:17:11.191 "write": true, 00:17:11.191 "unmap": true, 00:17:11.191 "flush": true, 00:17:11.191 "reset": true, 00:17:11.191 "nvme_admin": false, 00:17:11.191 "nvme_io": false, 00:17:11.191 "nvme_io_md": false, 00:17:11.191 "write_zeroes": true, 00:17:11.191 "zcopy": true, 00:17:11.191 "get_zone_info": false, 00:17:11.191 "zone_management": false, 00:17:11.191 "zone_append": false, 00:17:11.191 "compare": false, 00:17:11.191 "compare_and_write": false, 00:17:11.191 "abort": true, 00:17:11.191 "seek_hole": false, 00:17:11.191 "seek_data": false, 00:17:11.191 "copy": true, 00:17:11.191 "nvme_iov_md": false 00:17:11.191 }, 00:17:11.191 "memory_domains": [ 00:17:11.191 { 00:17:11.191 "dma_device_id": "system", 00:17:11.191 "dma_device_type": 1 00:17:11.191 }, 00:17:11.191 { 00:17:11.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.191 "dma_device_type": 2 00:17:11.191 } 00:17:11.191 ], 00:17:11.191 "driver_specific": {} 00:17:11.191 } 00:17:11.191 ] 00:17:11.191 16:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:11.191 16:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:11.191 16:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:11.191 16:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:11.191 16:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:11.191 16:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:11.191 16:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:17:11.191 16:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.191 16:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.191 16:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.191 16:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.191 16:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.191 16:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.449 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.450 "name": "Existed_Raid", 00:17:11.450 "uuid": "06057f38-fe54-4883-b4ce-3ca03987dca9", 00:17:11.450 "strip_size_kb": 64, 00:17:11.450 "state": "online", 00:17:11.450 "raid_level": "raid0", 00:17:11.450 "superblock": false, 00:17:11.450 "num_base_bdevs": 3, 00:17:11.450 "num_base_bdevs_discovered": 3, 00:17:11.450 "num_base_bdevs_operational": 3, 00:17:11.450 "base_bdevs_list": [ 00:17:11.450 { 00:17:11.450 "name": "NewBaseBdev", 00:17:11.450 "uuid": "120ee391-2f87-41d6-9fb6-8b8d2b8a0879", 00:17:11.450 "is_configured": true, 00:17:11.450 "data_offset": 0, 00:17:11.450 "data_size": 65536 00:17:11.450 }, 00:17:11.450 { 00:17:11.450 "name": "BaseBdev2", 00:17:11.450 "uuid": "4887c481-f8c5-484e-94f9-4f35c8fcff90", 00:17:11.450 "is_configured": true, 00:17:11.450 "data_offset": 0, 00:17:11.450 "data_size": 65536 00:17:11.450 }, 00:17:11.450 { 00:17:11.450 "name": "BaseBdev3", 00:17:11.450 "uuid": "5c626306-44e2-4d9a-a84a-40f8655bab50", 00:17:11.450 "is_configured": true, 00:17:11.450 "data_offset": 0, 00:17:11.450 "data_size": 65536 00:17:11.450 } 00:17:11.450 ] 00:17:11.450 }' 
00:17:11.450 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.450 16:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.017 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:12.017 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:12.017 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:12.017 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:12.017 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:12.017 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:12.017 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:12.017 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:12.277 [2024-07-24 16:34:08.908124] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.277 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:12.277 "name": "Existed_Raid", 00:17:12.277 "aliases": [ 00:17:12.277 "06057f38-fe54-4883-b4ce-3ca03987dca9" 00:17:12.277 ], 00:17:12.277 "product_name": "Raid Volume", 00:17:12.277 "block_size": 512, 00:17:12.277 "num_blocks": 196608, 00:17:12.277 "uuid": "06057f38-fe54-4883-b4ce-3ca03987dca9", 00:17:12.277 "assigned_rate_limits": { 00:17:12.277 "rw_ios_per_sec": 0, 00:17:12.277 "rw_mbytes_per_sec": 0, 00:17:12.277 "r_mbytes_per_sec": 0, 00:17:12.277 "w_mbytes_per_sec": 0 00:17:12.277 }, 00:17:12.277 "claimed": false, 00:17:12.277 "zoned": false, 00:17:12.277 "supported_io_types": 
{ 00:17:12.277 "read": true, 00:17:12.277 "write": true, 00:17:12.277 "unmap": true, 00:17:12.277 "flush": true, 00:17:12.277 "reset": true, 00:17:12.277 "nvme_admin": false, 00:17:12.277 "nvme_io": false, 00:17:12.277 "nvme_io_md": false, 00:17:12.277 "write_zeroes": true, 00:17:12.277 "zcopy": false, 00:17:12.277 "get_zone_info": false, 00:17:12.277 "zone_management": false, 00:17:12.277 "zone_append": false, 00:17:12.277 "compare": false, 00:17:12.277 "compare_and_write": false, 00:17:12.277 "abort": false, 00:17:12.277 "seek_hole": false, 00:17:12.277 "seek_data": false, 00:17:12.277 "copy": false, 00:17:12.277 "nvme_iov_md": false 00:17:12.277 }, 00:17:12.277 "memory_domains": [ 00:17:12.277 { 00:17:12.277 "dma_device_id": "system", 00:17:12.277 "dma_device_type": 1 00:17:12.277 }, 00:17:12.277 { 00:17:12.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.277 "dma_device_type": 2 00:17:12.277 }, 00:17:12.277 { 00:17:12.277 "dma_device_id": "system", 00:17:12.277 "dma_device_type": 1 00:17:12.277 }, 00:17:12.277 { 00:17:12.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.277 "dma_device_type": 2 00:17:12.277 }, 00:17:12.277 { 00:17:12.277 "dma_device_id": "system", 00:17:12.277 "dma_device_type": 1 00:17:12.277 }, 00:17:12.277 { 00:17:12.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.277 "dma_device_type": 2 00:17:12.277 } 00:17:12.277 ], 00:17:12.277 "driver_specific": { 00:17:12.277 "raid": { 00:17:12.277 "uuid": "06057f38-fe54-4883-b4ce-3ca03987dca9", 00:17:12.277 "strip_size_kb": 64, 00:17:12.277 "state": "online", 00:17:12.277 "raid_level": "raid0", 00:17:12.277 "superblock": false, 00:17:12.277 "num_base_bdevs": 3, 00:17:12.277 "num_base_bdevs_discovered": 3, 00:17:12.277 "num_base_bdevs_operational": 3, 00:17:12.277 "base_bdevs_list": [ 00:17:12.277 { 00:17:12.277 "name": "NewBaseBdev", 00:17:12.277 "uuid": "120ee391-2f87-41d6-9fb6-8b8d2b8a0879", 00:17:12.277 "is_configured": true, 00:17:12.277 "data_offset": 0, 00:17:12.277 
"data_size": 65536 00:17:12.277 }, 00:17:12.277 { 00:17:12.277 "name": "BaseBdev2", 00:17:12.277 "uuid": "4887c481-f8c5-484e-94f9-4f35c8fcff90", 00:17:12.277 "is_configured": true, 00:17:12.277 "data_offset": 0, 00:17:12.277 "data_size": 65536 00:17:12.277 }, 00:17:12.277 { 00:17:12.277 "name": "BaseBdev3", 00:17:12.277 "uuid": "5c626306-44e2-4d9a-a84a-40f8655bab50", 00:17:12.277 "is_configured": true, 00:17:12.277 "data_offset": 0, 00:17:12.277 "data_size": 65536 00:17:12.277 } 00:17:12.277 ] 00:17:12.277 } 00:17:12.277 } 00:17:12.277 }' 00:17:12.277 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:12.277 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:12.277 BaseBdev2 00:17:12.277 BaseBdev3' 00:17:12.277 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.277 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:12.277 16:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:12.536 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:12.536 "name": "NewBaseBdev", 00:17:12.536 "aliases": [ 00:17:12.536 "120ee391-2f87-41d6-9fb6-8b8d2b8a0879" 00:17:12.536 ], 00:17:12.536 "product_name": "Malloc disk", 00:17:12.536 "block_size": 512, 00:17:12.536 "num_blocks": 65536, 00:17:12.536 "uuid": "120ee391-2f87-41d6-9fb6-8b8d2b8a0879", 00:17:12.536 "assigned_rate_limits": { 00:17:12.536 "rw_ios_per_sec": 0, 00:17:12.536 "rw_mbytes_per_sec": 0, 00:17:12.536 "r_mbytes_per_sec": 0, 00:17:12.536 "w_mbytes_per_sec": 0 00:17:12.536 }, 00:17:12.536 "claimed": true, 00:17:12.536 "claim_type": "exclusive_write", 00:17:12.536 
"zoned": false, 00:17:12.536 "supported_io_types": { 00:17:12.536 "read": true, 00:17:12.536 "write": true, 00:17:12.536 "unmap": true, 00:17:12.536 "flush": true, 00:17:12.536 "reset": true, 00:17:12.536 "nvme_admin": false, 00:17:12.536 "nvme_io": false, 00:17:12.536 "nvme_io_md": false, 00:17:12.536 "write_zeroes": true, 00:17:12.536 "zcopy": true, 00:17:12.536 "get_zone_info": false, 00:17:12.536 "zone_management": false, 00:17:12.536 "zone_append": false, 00:17:12.536 "compare": false, 00:17:12.536 "compare_and_write": false, 00:17:12.536 "abort": true, 00:17:12.536 "seek_hole": false, 00:17:12.536 "seek_data": false, 00:17:12.536 "copy": true, 00:17:12.536 "nvme_iov_md": false 00:17:12.536 }, 00:17:12.536 "memory_domains": [ 00:17:12.536 { 00:17:12.536 "dma_device_id": "system", 00:17:12.536 "dma_device_type": 1 00:17:12.536 }, 00:17:12.536 { 00:17:12.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.536 "dma_device_type": 2 00:17:12.536 } 00:17:12.536 ], 00:17:12.536 "driver_specific": {} 00:17:12.536 }' 00:17:12.536 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.536 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.536 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:12.536 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.536 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.536 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:12.536 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.794 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.794 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:12.794 16:34:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.794 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.794 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:12.794 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.794 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:12.794 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:13.053 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:13.053 "name": "BaseBdev2", 00:17:13.053 "aliases": [ 00:17:13.053 "4887c481-f8c5-484e-94f9-4f35c8fcff90" 00:17:13.053 ], 00:17:13.053 "product_name": "Malloc disk", 00:17:13.053 "block_size": 512, 00:17:13.053 "num_blocks": 65536, 00:17:13.053 "uuid": "4887c481-f8c5-484e-94f9-4f35c8fcff90", 00:17:13.053 "assigned_rate_limits": { 00:17:13.053 "rw_ios_per_sec": 0, 00:17:13.053 "rw_mbytes_per_sec": 0, 00:17:13.053 "r_mbytes_per_sec": 0, 00:17:13.053 "w_mbytes_per_sec": 0 00:17:13.053 }, 00:17:13.053 "claimed": true, 00:17:13.053 "claim_type": "exclusive_write", 00:17:13.053 "zoned": false, 00:17:13.053 "supported_io_types": { 00:17:13.053 "read": true, 00:17:13.053 "write": true, 00:17:13.053 "unmap": true, 00:17:13.053 "flush": true, 00:17:13.053 "reset": true, 00:17:13.053 "nvme_admin": false, 00:17:13.053 "nvme_io": false, 00:17:13.053 "nvme_io_md": false, 00:17:13.053 "write_zeroes": true, 00:17:13.053 "zcopy": true, 00:17:13.053 "get_zone_info": false, 00:17:13.053 "zone_management": false, 00:17:13.053 "zone_append": false, 00:17:13.053 "compare": false, 00:17:13.053 "compare_and_write": false, 00:17:13.053 "abort": true, 00:17:13.053 "seek_hole": false, 
00:17:13.053 "seek_data": false, 00:17:13.053 "copy": true, 00:17:13.053 "nvme_iov_md": false 00:17:13.053 }, 00:17:13.053 "memory_domains": [ 00:17:13.053 { 00:17:13.053 "dma_device_id": "system", 00:17:13.053 "dma_device_type": 1 00:17:13.053 }, 00:17:13.053 { 00:17:13.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.053 "dma_device_type": 2 00:17:13.053 } 00:17:13.053 ], 00:17:13.053 "driver_specific": {} 00:17:13.053 }' 00:17:13.053 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.053 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.053 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.053 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.053 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.312 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.312 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.312 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.312 16:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.312 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.312 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.312 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:13.312 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:13.312 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 
00:17:13.312 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:13.570 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:13.570 "name": "BaseBdev3", 00:17:13.570 "aliases": [ 00:17:13.570 "5c626306-44e2-4d9a-a84a-40f8655bab50" 00:17:13.570 ], 00:17:13.570 "product_name": "Malloc disk", 00:17:13.570 "block_size": 512, 00:17:13.570 "num_blocks": 65536, 00:17:13.570 "uuid": "5c626306-44e2-4d9a-a84a-40f8655bab50", 00:17:13.570 "assigned_rate_limits": { 00:17:13.570 "rw_ios_per_sec": 0, 00:17:13.570 "rw_mbytes_per_sec": 0, 00:17:13.570 "r_mbytes_per_sec": 0, 00:17:13.570 "w_mbytes_per_sec": 0 00:17:13.570 }, 00:17:13.570 "claimed": true, 00:17:13.570 "claim_type": "exclusive_write", 00:17:13.570 "zoned": false, 00:17:13.570 "supported_io_types": { 00:17:13.570 "read": true, 00:17:13.570 "write": true, 00:17:13.570 "unmap": true, 00:17:13.570 "flush": true, 00:17:13.570 "reset": true, 00:17:13.570 "nvme_admin": false, 00:17:13.570 "nvme_io": false, 00:17:13.570 "nvme_io_md": false, 00:17:13.570 "write_zeroes": true, 00:17:13.570 "zcopy": true, 00:17:13.570 "get_zone_info": false, 00:17:13.570 "zone_management": false, 00:17:13.570 "zone_append": false, 00:17:13.570 "compare": false, 00:17:13.570 "compare_and_write": false, 00:17:13.570 "abort": true, 00:17:13.570 "seek_hole": false, 00:17:13.570 "seek_data": false, 00:17:13.570 "copy": true, 00:17:13.570 "nvme_iov_md": false 00:17:13.570 }, 00:17:13.570 "memory_domains": [ 00:17:13.570 { 00:17:13.570 "dma_device_id": "system", 00:17:13.570 "dma_device_type": 1 00:17:13.570 }, 00:17:13.570 { 00:17:13.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.570 "dma_device_type": 2 00:17:13.570 } 00:17:13.570 ], 00:17:13.570 "driver_specific": {} 00:17:13.570 }' 00:17:13.570 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.570 16:34:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.570 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.570 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.828 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.828 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.828 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.828 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.828 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.828 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.828 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.828 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:13.828 16:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:14.086 [2024-07-24 16:34:10.881146] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:14.086 [2024-07-24 16:34:10.881181] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.086 [2024-07-24 16:34:10.881263] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.086 [2024-07-24 16:34:10.881326] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.086 [2024-07-24 16:34:10.881349] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:17:14.086 16:34:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1625362 00:17:14.086 16:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 1625362 ']' 00:17:14.086 16:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 1625362 00:17:14.086 16:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:17:14.086 16:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:14.086 16:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1625362 00:17:14.345 16:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:14.345 16:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:14.345 16:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1625362' 00:17:14.345 killing process with pid 1625362 00:17:14.345 16:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 1625362 00:17:14.345 [2024-07-24 16:34:10.959051] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:14.345 16:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 1625362 00:17:14.604 [2024-07-24 16:34:11.262913] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:16.511 00:17:16.511 real 0m29.543s 00:17:16.511 user 0m51.998s 00:17:16.511 sys 0m5.024s 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.511 ************************************ 00:17:16.511 END TEST 
raid_state_function_test 00:17:16.511 ************************************ 00:17:16.511 16:34:12 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:17:16.511 16:34:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:16.511 16:34:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.511 16:34:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:16.511 ************************************ 00:17:16.511 START TEST raid_state_function_test_sb 00:17:16.511 ************************************ 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1630970 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1630970' 00:17:16.511 Process raid pid: 1630970 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1630970 /var/tmp/spdk-raid.sock 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 1630970 ']' 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:16.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:16.511 16:34:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.511 [2024-07-24 16:34:13.073026] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:17:16.511 [2024-07-24 16:34:13.073146] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.511 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:16.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.511 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:16.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.511 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:16.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.511 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:16.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.511 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:16.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.511 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:16.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.511 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:16.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.511 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:16.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.511 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:16.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.511 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:16.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.511 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:16.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.511 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:16.511 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.511 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:16.512 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:16.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:16.512 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:16.512 [2024-07-24 16:34:13.297978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.772 [2024-07-24 16:34:13.570453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.339 [2024-07-24 16:34:13.910019] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.339 [2024-07-24 16:34:13.910056] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.339 16:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:17.339 16:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:17.339 16:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:17.598 [2024-07-24 16:34:14.305770] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:17.598 [2024-07-24 16:34:14.305825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:17.598 [2024-07-24 16:34:14.305840] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:17.598 [2024-07-24 16:34:14.305858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:17.598 [2024-07-24 16:34:14.305870] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:17.598 [2024-07-24 16:34:14.305885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:17.598 16:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:17.598 16:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:17.598 16:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:17.598 16:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:17.598 16:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:17.598 16:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:17.598 16:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.599 16:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.599 16:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.599 16:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.599 16:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.599 16:34:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.857 16:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.857 "name": "Existed_Raid", 00:17:17.857 "uuid": "b72da6eb-453e-4e7b-adfb-c5dbd0b2cfda", 00:17:17.857 "strip_size_kb": 64, 00:17:17.857 "state": "configuring", 00:17:17.857 "raid_level": "raid0", 00:17:17.857 "superblock": true, 00:17:17.857 "num_base_bdevs": 3, 00:17:17.857 "num_base_bdevs_discovered": 0, 00:17:17.857 "num_base_bdevs_operational": 3, 00:17:17.857 "base_bdevs_list": [ 00:17:17.857 { 00:17:17.857 "name": "BaseBdev1", 00:17:17.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.857 "is_configured": false, 00:17:17.857 "data_offset": 0, 00:17:17.857 "data_size": 0 00:17:17.857 }, 00:17:17.857 { 00:17:17.857 "name": "BaseBdev2", 00:17:17.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.857 "is_configured": false, 00:17:17.857 "data_offset": 0, 00:17:17.857 "data_size": 0 00:17:17.857 }, 00:17:17.857 { 00:17:17.858 "name": "BaseBdev3", 00:17:17.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.858 "is_configured": false, 00:17:17.858 "data_offset": 0, 00:17:17.858 "data_size": 0 00:17:17.858 } 00:17:17.858 ] 00:17:17.858 }' 00:17:17.858 16:34:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.858 16:34:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.423 16:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:18.683 [2024-07-24 16:34:15.308288] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:18.683 [2024-07-24 16:34:15.308329] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state 
configuring 00:17:18.683 16:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:18.683 [2024-07-24 16:34:15.536965] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:18.683 [2024-07-24 16:34:15.537008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:18.683 [2024-07-24 16:34:15.537021] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.683 [2024-07-24 16:34:15.537041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.683 [2024-07-24 16:34:15.537053] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:18.683 [2024-07-24 16:34:15.537068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:18.942 16:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:19.201 [2024-07-24 16:34:15.815451] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.201 BaseBdev1 00:17:19.201 16:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:19.201 16:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:19.201 16:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:19.201 16:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:19.201 16:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:17:19.201 16:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:19.201 16:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:19.460 16:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:19.460 [ 00:17:19.460 { 00:17:19.460 "name": "BaseBdev1", 00:17:19.460 "aliases": [ 00:17:19.460 "256991e0-5b40-4e59-b547-6dcf5bdfc52d" 00:17:19.460 ], 00:17:19.460 "product_name": "Malloc disk", 00:17:19.460 "block_size": 512, 00:17:19.460 "num_blocks": 65536, 00:17:19.460 "uuid": "256991e0-5b40-4e59-b547-6dcf5bdfc52d", 00:17:19.460 "assigned_rate_limits": { 00:17:19.460 "rw_ios_per_sec": 0, 00:17:19.461 "rw_mbytes_per_sec": 0, 00:17:19.461 "r_mbytes_per_sec": 0, 00:17:19.461 "w_mbytes_per_sec": 0 00:17:19.461 }, 00:17:19.461 "claimed": true, 00:17:19.461 "claim_type": "exclusive_write", 00:17:19.461 "zoned": false, 00:17:19.461 "supported_io_types": { 00:17:19.461 "read": true, 00:17:19.461 "write": true, 00:17:19.461 "unmap": true, 00:17:19.461 "flush": true, 00:17:19.461 "reset": true, 00:17:19.461 "nvme_admin": false, 00:17:19.461 "nvme_io": false, 00:17:19.461 "nvme_io_md": false, 00:17:19.461 "write_zeroes": true, 00:17:19.461 "zcopy": true, 00:17:19.461 "get_zone_info": false, 00:17:19.461 "zone_management": false, 00:17:19.461 "zone_append": false, 00:17:19.461 "compare": false, 00:17:19.461 "compare_and_write": false, 00:17:19.461 "abort": true, 00:17:19.461 "seek_hole": false, 00:17:19.461 "seek_data": false, 00:17:19.461 "copy": true, 00:17:19.461 "nvme_iov_md": false 00:17:19.461 }, 00:17:19.461 "memory_domains": [ 00:17:19.461 { 00:17:19.461 "dma_device_id": "system", 00:17:19.461 "dma_device_type": 1 
00:17:19.461 }, 00:17:19.461 { 00:17:19.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.461 "dma_device_type": 2 00:17:19.461 } 00:17:19.461 ], 00:17:19.461 "driver_specific": {} 00:17:19.461 } 00:17:19.461 ] 00:17:19.461 16:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:19.461 16:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:19.461 16:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:19.461 16:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:19.461 16:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:19.461 16:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:19.461 16:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:19.461 16:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:19.461 16:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:19.461 16:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:19.461 16:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:19.461 16:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.461 16:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.720 16:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.720 "name": "Existed_Raid", 
00:17:19.720 "uuid": "ed73cdb3-80af-4212-9b26-41fcdc1c9449", 00:17:19.720 "strip_size_kb": 64, 00:17:19.720 "state": "configuring", 00:17:19.720 "raid_level": "raid0", 00:17:19.720 "superblock": true, 00:17:19.720 "num_base_bdevs": 3, 00:17:19.720 "num_base_bdevs_discovered": 1, 00:17:19.720 "num_base_bdevs_operational": 3, 00:17:19.720 "base_bdevs_list": [ 00:17:19.720 { 00:17:19.720 "name": "BaseBdev1", 00:17:19.720 "uuid": "256991e0-5b40-4e59-b547-6dcf5bdfc52d", 00:17:19.720 "is_configured": true, 00:17:19.720 "data_offset": 2048, 00:17:19.720 "data_size": 63488 00:17:19.720 }, 00:17:19.720 { 00:17:19.720 "name": "BaseBdev2", 00:17:19.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.720 "is_configured": false, 00:17:19.720 "data_offset": 0, 00:17:19.720 "data_size": 0 00:17:19.720 }, 00:17:19.720 { 00:17:19.720 "name": "BaseBdev3", 00:17:19.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.720 "is_configured": false, 00:17:19.720 "data_offset": 0, 00:17:19.720 "data_size": 0 00:17:19.720 } 00:17:19.720 ] 00:17:19.720 }' 00:17:19.720 16:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.720 16:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.288 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:20.547 [2024-07-24 16:34:17.295477] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:20.547 [2024-07-24 16:34:17.295532] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:17:20.547 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:17:20.830 [2024-07-24 16:34:17.524211] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.830 [2024-07-24 16:34:17.526523] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.830 [2024-07-24 16:34:17.526568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:20.830 [2024-07-24 16:34:17.526582] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:20.830 [2024-07-24 16:34:17.526599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:20.830 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:20.830 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:20.830 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:20.830 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:20.830 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:20.830 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:20.831 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:20.831 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:20.831 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.831 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.831 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.831 
16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.831 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.831 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.095 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.095 "name": "Existed_Raid", 00:17:21.095 "uuid": "de549ec3-e7a8-4b49-a356-2770c9310421", 00:17:21.095 "strip_size_kb": 64, 00:17:21.095 "state": "configuring", 00:17:21.095 "raid_level": "raid0", 00:17:21.095 "superblock": true, 00:17:21.095 "num_base_bdevs": 3, 00:17:21.095 "num_base_bdevs_discovered": 1, 00:17:21.095 "num_base_bdevs_operational": 3, 00:17:21.095 "base_bdevs_list": [ 00:17:21.095 { 00:17:21.095 "name": "BaseBdev1", 00:17:21.095 "uuid": "256991e0-5b40-4e59-b547-6dcf5bdfc52d", 00:17:21.096 "is_configured": true, 00:17:21.096 "data_offset": 2048, 00:17:21.096 "data_size": 63488 00:17:21.096 }, 00:17:21.096 { 00:17:21.096 "name": "BaseBdev2", 00:17:21.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.096 "is_configured": false, 00:17:21.096 "data_offset": 0, 00:17:21.096 "data_size": 0 00:17:21.096 }, 00:17:21.096 { 00:17:21.096 "name": "BaseBdev3", 00:17:21.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.096 "is_configured": false, 00:17:21.096 "data_offset": 0, 00:17:21.096 "data_size": 0 00:17:21.096 } 00:17:21.096 ] 00:17:21.096 }' 00:17:21.096 16:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.096 16:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.664 16:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:21.922 [2024-07-24 16:34:18.600925] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:21.922 BaseBdev2 00:17:21.922 16:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:21.922 16:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:21.922 16:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:21.922 16:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:21.922 16:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:21.922 16:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:21.922 16:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:22.180 16:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:22.439 [ 00:17:22.439 { 00:17:22.439 "name": "BaseBdev2", 00:17:22.439 "aliases": [ 00:17:22.439 "334b7b00-c606-4592-babf-db1c98331df8" 00:17:22.439 ], 00:17:22.439 "product_name": "Malloc disk", 00:17:22.439 "block_size": 512, 00:17:22.439 "num_blocks": 65536, 00:17:22.439 "uuid": "334b7b00-c606-4592-babf-db1c98331df8", 00:17:22.439 "assigned_rate_limits": { 00:17:22.439 "rw_ios_per_sec": 0, 00:17:22.439 "rw_mbytes_per_sec": 0, 00:17:22.439 "r_mbytes_per_sec": 0, 00:17:22.439 "w_mbytes_per_sec": 0 00:17:22.439 }, 00:17:22.439 "claimed": true, 00:17:22.439 "claim_type": "exclusive_write", 00:17:22.439 "zoned": false, 00:17:22.439 "supported_io_types": { 
00:17:22.439 "read": true, 00:17:22.439 "write": true, 00:17:22.439 "unmap": true, 00:17:22.439 "flush": true, 00:17:22.439 "reset": true, 00:17:22.439 "nvme_admin": false, 00:17:22.439 "nvme_io": false, 00:17:22.439 "nvme_io_md": false, 00:17:22.439 "write_zeroes": true, 00:17:22.439 "zcopy": true, 00:17:22.439 "get_zone_info": false, 00:17:22.439 "zone_management": false, 00:17:22.439 "zone_append": false, 00:17:22.439 "compare": false, 00:17:22.439 "compare_and_write": false, 00:17:22.439 "abort": true, 00:17:22.439 "seek_hole": false, 00:17:22.439 "seek_data": false, 00:17:22.439 "copy": true, 00:17:22.439 "nvme_iov_md": false 00:17:22.439 }, 00:17:22.439 "memory_domains": [ 00:17:22.439 { 00:17:22.439 "dma_device_id": "system", 00:17:22.439 "dma_device_type": 1 00:17:22.439 }, 00:17:22.439 { 00:17:22.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.439 "dma_device_type": 2 00:17:22.439 } 00:17:22.439 ], 00:17:22.439 "driver_specific": {} 00:17:22.439 } 00:17:22.439 ] 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.439 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.439 "name": "Existed_Raid", 00:17:22.439 "uuid": "de549ec3-e7a8-4b49-a356-2770c9310421", 00:17:22.439 "strip_size_kb": 64, 00:17:22.439 "state": "configuring", 00:17:22.439 "raid_level": "raid0", 00:17:22.439 "superblock": true, 00:17:22.439 "num_base_bdevs": 3, 00:17:22.439 "num_base_bdevs_discovered": 2, 00:17:22.440 "num_base_bdevs_operational": 3, 00:17:22.440 "base_bdevs_list": [ 00:17:22.440 { 00:17:22.440 "name": "BaseBdev1", 00:17:22.440 "uuid": "256991e0-5b40-4e59-b547-6dcf5bdfc52d", 00:17:22.440 "is_configured": true, 00:17:22.440 "data_offset": 2048, 00:17:22.440 "data_size": 63488 00:17:22.440 }, 00:17:22.440 { 00:17:22.440 "name": "BaseBdev2", 00:17:22.440 "uuid": "334b7b00-c606-4592-babf-db1c98331df8", 00:17:22.440 "is_configured": true, 00:17:22.440 "data_offset": 2048, 00:17:22.440 "data_size": 63488 00:17:22.440 }, 00:17:22.440 { 00:17:22.440 "name": "BaseBdev3", 00:17:22.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.440 "is_configured": false, 00:17:22.440 "data_offset": 0, 00:17:22.440 
"data_size": 0 00:17:22.440 } 00:17:22.440 ] 00:17:22.440 }' 00:17:22.440 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.440 16:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.006 16:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:23.264 [2024-07-24 16:34:20.102793] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:23.264 [2024-07-24 16:34:20.103063] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:17:23.264 [2024-07-24 16:34:20.103086] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:23.264 [2024-07-24 16:34:20.103423] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:17:23.264 [2024-07-24 16:34:20.103646] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:17:23.265 [2024-07-24 16:34:20.103661] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:17:23.265 [2024-07-24 16:34:20.103865] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.265 BaseBdev3 00:17:23.265 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:23.265 16:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:23.265 16:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:23.265 16:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:23.265 16:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:23.265 
16:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:23.265 16:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:23.522 16:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:23.781 [ 00:17:23.781 { 00:17:23.781 "name": "BaseBdev3", 00:17:23.781 "aliases": [ 00:17:23.781 "fa9a1c9a-a746-4f60-952c-b574e80692ab" 00:17:23.781 ], 00:17:23.781 "product_name": "Malloc disk", 00:17:23.781 "block_size": 512, 00:17:23.781 "num_blocks": 65536, 00:17:23.781 "uuid": "fa9a1c9a-a746-4f60-952c-b574e80692ab", 00:17:23.781 "assigned_rate_limits": { 00:17:23.781 "rw_ios_per_sec": 0, 00:17:23.781 "rw_mbytes_per_sec": 0, 00:17:23.781 "r_mbytes_per_sec": 0, 00:17:23.781 "w_mbytes_per_sec": 0 00:17:23.781 }, 00:17:23.781 "claimed": true, 00:17:23.781 "claim_type": "exclusive_write", 00:17:23.781 "zoned": false, 00:17:23.781 "supported_io_types": { 00:17:23.781 "read": true, 00:17:23.781 "write": true, 00:17:23.781 "unmap": true, 00:17:23.781 "flush": true, 00:17:23.781 "reset": true, 00:17:23.781 "nvme_admin": false, 00:17:23.781 "nvme_io": false, 00:17:23.781 "nvme_io_md": false, 00:17:23.781 "write_zeroes": true, 00:17:23.781 "zcopy": true, 00:17:23.781 "get_zone_info": false, 00:17:23.781 "zone_management": false, 00:17:23.781 "zone_append": false, 00:17:23.781 "compare": false, 00:17:23.781 "compare_and_write": false, 00:17:23.781 "abort": true, 00:17:23.781 "seek_hole": false, 00:17:23.781 "seek_data": false, 00:17:23.781 "copy": true, 00:17:23.781 "nvme_iov_md": false 00:17:23.781 }, 00:17:23.781 "memory_domains": [ 00:17:23.781 { 00:17:23.781 "dma_device_id": "system", 00:17:23.781 "dma_device_type": 1 00:17:23.781 }, 
00:17:23.781 { 00:17:23.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.781 "dma_device_type": 2 00:17:23.781 } 00:17:23.781 ], 00:17:23.781 "driver_specific": {} 00:17:23.781 } 00:17:23.781 ] 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.781 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.039 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.039 "name": "Existed_Raid", 00:17:24.039 "uuid": "de549ec3-e7a8-4b49-a356-2770c9310421", 00:17:24.039 "strip_size_kb": 64, 00:17:24.039 "state": "online", 00:17:24.039 "raid_level": "raid0", 00:17:24.039 "superblock": true, 00:17:24.039 "num_base_bdevs": 3, 00:17:24.039 "num_base_bdevs_discovered": 3, 00:17:24.039 "num_base_bdevs_operational": 3, 00:17:24.039 "base_bdevs_list": [ 00:17:24.039 { 00:17:24.039 "name": "BaseBdev1", 00:17:24.039 "uuid": "256991e0-5b40-4e59-b547-6dcf5bdfc52d", 00:17:24.039 "is_configured": true, 00:17:24.039 "data_offset": 2048, 00:17:24.039 "data_size": 63488 00:17:24.039 }, 00:17:24.039 { 00:17:24.039 "name": "BaseBdev2", 00:17:24.039 "uuid": "334b7b00-c606-4592-babf-db1c98331df8", 00:17:24.039 "is_configured": true, 00:17:24.039 "data_offset": 2048, 00:17:24.039 "data_size": 63488 00:17:24.039 }, 00:17:24.039 { 00:17:24.039 "name": "BaseBdev3", 00:17:24.039 "uuid": "fa9a1c9a-a746-4f60-952c-b574e80692ab", 00:17:24.039 "is_configured": true, 00:17:24.039 "data_offset": 2048, 00:17:24.039 "data_size": 63488 00:17:24.039 } 00:17:24.039 ] 00:17:24.039 }' 00:17:24.039 16:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.039 16:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.606 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:24.606 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:24.606 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:24.606 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:24.606 16:34:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:24.606 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:24.606 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:24.606 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:24.865 [2024-07-24 16:34:21.587242] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.865 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:24.865 "name": "Existed_Raid", 00:17:24.865 "aliases": [ 00:17:24.865 "de549ec3-e7a8-4b49-a356-2770c9310421" 00:17:24.865 ], 00:17:24.865 "product_name": "Raid Volume", 00:17:24.865 "block_size": 512, 00:17:24.865 "num_blocks": 190464, 00:17:24.865 "uuid": "de549ec3-e7a8-4b49-a356-2770c9310421", 00:17:24.865 "assigned_rate_limits": { 00:17:24.865 "rw_ios_per_sec": 0, 00:17:24.865 "rw_mbytes_per_sec": 0, 00:17:24.865 "r_mbytes_per_sec": 0, 00:17:24.865 "w_mbytes_per_sec": 0 00:17:24.865 }, 00:17:24.865 "claimed": false, 00:17:24.865 "zoned": false, 00:17:24.865 "supported_io_types": { 00:17:24.865 "read": true, 00:17:24.865 "write": true, 00:17:24.865 "unmap": true, 00:17:24.865 "flush": true, 00:17:24.865 "reset": true, 00:17:24.865 "nvme_admin": false, 00:17:24.865 "nvme_io": false, 00:17:24.865 "nvme_io_md": false, 00:17:24.865 "write_zeroes": true, 00:17:24.865 "zcopy": false, 00:17:24.865 "get_zone_info": false, 00:17:24.865 "zone_management": false, 00:17:24.865 "zone_append": false, 00:17:24.865 "compare": false, 00:17:24.865 "compare_and_write": false, 00:17:24.865 "abort": false, 00:17:24.865 "seek_hole": false, 00:17:24.865 "seek_data": false, 00:17:24.865 "copy": false, 00:17:24.865 "nvme_iov_md": false 00:17:24.865 }, 00:17:24.865 
"memory_domains": [ 00:17:24.865 { 00:17:24.865 "dma_device_id": "system", 00:17:24.865 "dma_device_type": 1 00:17:24.865 }, 00:17:24.865 { 00:17:24.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.865 "dma_device_type": 2 00:17:24.865 }, 00:17:24.865 { 00:17:24.865 "dma_device_id": "system", 00:17:24.865 "dma_device_type": 1 00:17:24.865 }, 00:17:24.865 { 00:17:24.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.865 "dma_device_type": 2 00:17:24.865 }, 00:17:24.865 { 00:17:24.865 "dma_device_id": "system", 00:17:24.865 "dma_device_type": 1 00:17:24.865 }, 00:17:24.865 { 00:17:24.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.865 "dma_device_type": 2 00:17:24.865 } 00:17:24.865 ], 00:17:24.865 "driver_specific": { 00:17:24.865 "raid": { 00:17:24.865 "uuid": "de549ec3-e7a8-4b49-a356-2770c9310421", 00:17:24.865 "strip_size_kb": 64, 00:17:24.865 "state": "online", 00:17:24.865 "raid_level": "raid0", 00:17:24.865 "superblock": true, 00:17:24.865 "num_base_bdevs": 3, 00:17:24.865 "num_base_bdevs_discovered": 3, 00:17:24.865 "num_base_bdevs_operational": 3, 00:17:24.865 "base_bdevs_list": [ 00:17:24.865 { 00:17:24.865 "name": "BaseBdev1", 00:17:24.865 "uuid": "256991e0-5b40-4e59-b547-6dcf5bdfc52d", 00:17:24.865 "is_configured": true, 00:17:24.865 "data_offset": 2048, 00:17:24.865 "data_size": 63488 00:17:24.865 }, 00:17:24.865 { 00:17:24.865 "name": "BaseBdev2", 00:17:24.865 "uuid": "334b7b00-c606-4592-babf-db1c98331df8", 00:17:24.865 "is_configured": true, 00:17:24.865 "data_offset": 2048, 00:17:24.865 "data_size": 63488 00:17:24.865 }, 00:17:24.865 { 00:17:24.865 "name": "BaseBdev3", 00:17:24.865 "uuid": "fa9a1c9a-a746-4f60-952c-b574e80692ab", 00:17:24.865 "is_configured": true, 00:17:24.865 "data_offset": 2048, 00:17:24.865 "data_size": 63488 00:17:24.865 } 00:17:24.865 ] 00:17:24.865 } 00:17:24.865 } 00:17:24.865 }' 00:17:24.865 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:24.865 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:24.865 BaseBdev2 00:17:24.865 BaseBdev3' 00:17:24.865 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:24.865 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:24.865 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:25.124 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:25.124 "name": "BaseBdev1", 00:17:25.124 "aliases": [ 00:17:25.124 "256991e0-5b40-4e59-b547-6dcf5bdfc52d" 00:17:25.124 ], 00:17:25.124 "product_name": "Malloc disk", 00:17:25.124 "block_size": 512, 00:17:25.124 "num_blocks": 65536, 00:17:25.124 "uuid": "256991e0-5b40-4e59-b547-6dcf5bdfc52d", 00:17:25.124 "assigned_rate_limits": { 00:17:25.124 "rw_ios_per_sec": 0, 00:17:25.124 "rw_mbytes_per_sec": 0, 00:17:25.124 "r_mbytes_per_sec": 0, 00:17:25.124 "w_mbytes_per_sec": 0 00:17:25.124 }, 00:17:25.124 "claimed": true, 00:17:25.124 "claim_type": "exclusive_write", 00:17:25.124 "zoned": false, 00:17:25.124 "supported_io_types": { 00:17:25.124 "read": true, 00:17:25.124 "write": true, 00:17:25.124 "unmap": true, 00:17:25.124 "flush": true, 00:17:25.124 "reset": true, 00:17:25.124 "nvme_admin": false, 00:17:25.124 "nvme_io": false, 00:17:25.124 "nvme_io_md": false, 00:17:25.124 "write_zeroes": true, 00:17:25.124 "zcopy": true, 00:17:25.124 "get_zone_info": false, 00:17:25.124 "zone_management": false, 00:17:25.124 "zone_append": false, 00:17:25.124 "compare": false, 00:17:25.124 "compare_and_write": false, 00:17:25.124 "abort": true, 00:17:25.124 "seek_hole": false, 00:17:25.124 "seek_data": false, 
00:17:25.124 "copy": true, 00:17:25.124 "nvme_iov_md": false 00:17:25.124 }, 00:17:25.124 "memory_domains": [ 00:17:25.124 { 00:17:25.124 "dma_device_id": "system", 00:17:25.124 "dma_device_type": 1 00:17:25.124 }, 00:17:25.124 { 00:17:25.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.124 "dma_device_type": 2 00:17:25.124 } 00:17:25.124 ], 00:17:25.124 "driver_specific": {} 00:17:25.124 }' 00:17:25.124 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:25.124 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:25.124 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:25.124 16:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:25.382 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:25.382 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:25.382 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:25.382 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:25.382 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:25.382 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:25.382 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:25.382 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:25.382 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:25.640 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 
00:17:25.640 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:25.640 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:25.640 "name": "BaseBdev2", 00:17:25.640 "aliases": [ 00:17:25.640 "334b7b00-c606-4592-babf-db1c98331df8" 00:17:25.640 ], 00:17:25.640 "product_name": "Malloc disk", 00:17:25.640 "block_size": 512, 00:17:25.640 "num_blocks": 65536, 00:17:25.640 "uuid": "334b7b00-c606-4592-babf-db1c98331df8", 00:17:25.640 "assigned_rate_limits": { 00:17:25.640 "rw_ios_per_sec": 0, 00:17:25.640 "rw_mbytes_per_sec": 0, 00:17:25.640 "r_mbytes_per_sec": 0, 00:17:25.640 "w_mbytes_per_sec": 0 00:17:25.640 }, 00:17:25.640 "claimed": true, 00:17:25.640 "claim_type": "exclusive_write", 00:17:25.640 "zoned": false, 00:17:25.640 "supported_io_types": { 00:17:25.640 "read": true, 00:17:25.640 "write": true, 00:17:25.640 "unmap": true, 00:17:25.640 "flush": true, 00:17:25.640 "reset": true, 00:17:25.640 "nvme_admin": false, 00:17:25.640 "nvme_io": false, 00:17:25.640 "nvme_io_md": false, 00:17:25.640 "write_zeroes": true, 00:17:25.640 "zcopy": true, 00:17:25.640 "get_zone_info": false, 00:17:25.640 "zone_management": false, 00:17:25.640 "zone_append": false, 00:17:25.640 "compare": false, 00:17:25.640 "compare_and_write": false, 00:17:25.640 "abort": true, 00:17:25.640 "seek_hole": false, 00:17:25.640 "seek_data": false, 00:17:25.640 "copy": true, 00:17:25.640 "nvme_iov_md": false 00:17:25.640 }, 00:17:25.640 "memory_domains": [ 00:17:25.640 { 00:17:25.640 "dma_device_id": "system", 00:17:25.640 "dma_device_type": 1 00:17:25.640 }, 00:17:25.640 { 00:17:25.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.640 "dma_device_type": 2 00:17:25.640 } 00:17:25.640 ], 00:17:25.640 "driver_specific": {} 00:17:25.640 }' 00:17:25.640 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:25.899 16:34:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:25.899 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:25.899 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:25.899 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:25.899 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:25.899 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:25.899 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:25.899 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:25.899 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:26.157 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:26.157 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:26.157 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:26.157 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:26.157 16:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:26.416 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:26.416 "name": "BaseBdev3", 00:17:26.416 "aliases": [ 00:17:26.416 "fa9a1c9a-a746-4f60-952c-b574e80692ab" 00:17:26.416 ], 00:17:26.416 "product_name": "Malloc disk", 00:17:26.416 "block_size": 512, 00:17:26.416 "num_blocks": 65536, 00:17:26.416 "uuid": "fa9a1c9a-a746-4f60-952c-b574e80692ab", 00:17:26.416 "assigned_rate_limits": { 00:17:26.416 
"rw_ios_per_sec": 0, 00:17:26.416 "rw_mbytes_per_sec": 0, 00:17:26.416 "r_mbytes_per_sec": 0, 00:17:26.416 "w_mbytes_per_sec": 0 00:17:26.416 }, 00:17:26.416 "claimed": true, 00:17:26.416 "claim_type": "exclusive_write", 00:17:26.416 "zoned": false, 00:17:26.416 "supported_io_types": { 00:17:26.416 "read": true, 00:17:26.416 "write": true, 00:17:26.416 "unmap": true, 00:17:26.416 "flush": true, 00:17:26.416 "reset": true, 00:17:26.416 "nvme_admin": false, 00:17:26.416 "nvme_io": false, 00:17:26.416 "nvme_io_md": false, 00:17:26.416 "write_zeroes": true, 00:17:26.416 "zcopy": true, 00:17:26.416 "get_zone_info": false, 00:17:26.416 "zone_management": false, 00:17:26.416 "zone_append": false, 00:17:26.416 "compare": false, 00:17:26.416 "compare_and_write": false, 00:17:26.416 "abort": true, 00:17:26.416 "seek_hole": false, 00:17:26.416 "seek_data": false, 00:17:26.416 "copy": true, 00:17:26.416 "nvme_iov_md": false 00:17:26.416 }, 00:17:26.416 "memory_domains": [ 00:17:26.416 { 00:17:26.416 "dma_device_id": "system", 00:17:26.416 "dma_device_type": 1 00:17:26.416 }, 00:17:26.416 { 00:17:26.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.416 "dma_device_type": 2 00:17:26.416 } 00:17:26.416 ], 00:17:26.416 "driver_specific": {} 00:17:26.416 }' 00:17:26.416 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.416 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.416 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:26.416 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:26.416 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:26.416 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:26.416 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:17:26.416 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:26.675 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:26.675 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:26.675 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:26.675 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:26.675 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:26.933 [2024-07-24 16:34:23.612416] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.933 [2024-07-24 16:34:23.612447] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.933 [2024-07-24 16:34:23.612508] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=offline 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.933 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:27.192 "name": "Existed_Raid", 00:17:27.192 "uuid": "de549ec3-e7a8-4b49-a356-2770c9310421", 00:17:27.192 "strip_size_kb": 64, 00:17:27.192 "state": "offline", 00:17:27.192 "raid_level": "raid0", 00:17:27.192 "superblock": true, 00:17:27.192 "num_base_bdevs": 3, 00:17:27.192 "num_base_bdevs_discovered": 2, 00:17:27.192 "num_base_bdevs_operational": 2, 00:17:27.192 "base_bdevs_list": [ 00:17:27.192 { 00:17:27.192 "name": null, 00:17:27.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.192 "is_configured": false, 00:17:27.192 "data_offset": 2048, 00:17:27.192 "data_size": 63488 00:17:27.192 }, 00:17:27.192 { 00:17:27.192 "name": "BaseBdev2", 00:17:27.192 "uuid": "334b7b00-c606-4592-babf-db1c98331df8", 
00:17:27.192 "is_configured": true, 00:17:27.192 "data_offset": 2048, 00:17:27.192 "data_size": 63488 00:17:27.192 }, 00:17:27.192 { 00:17:27.192 "name": "BaseBdev3", 00:17:27.192 "uuid": "fa9a1c9a-a746-4f60-952c-b574e80692ab", 00:17:27.192 "is_configured": true, 00:17:27.192 "data_offset": 2048, 00:17:27.192 "data_size": 63488 00:17:27.192 } 00:17:27.192 ] 00:17:27.192 }' 00:17:27.192 16:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:27.192 16:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.759 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:27.759 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:27.759 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.759 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:28.017 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:28.017 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.018 16:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:28.276 [2024-07-24 16:34:24.912893] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:28.276 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:28.276 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:28.276 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.276 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:28.535 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:28.535 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:28.535 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:28.793 [2024-07-24 16:34:25.503291] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:28.793 [2024-07-24 16:34:25.503346] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:17:29.052 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:29.052 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:29.052 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.052 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:29.052 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:29.052 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:29.052 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:29.052 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:29.052 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < 
num_base_bdevs )) 00:17:29.052 16:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:29.310 BaseBdev2 00:17:29.310 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:29.310 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:29.310 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:29.310 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:29.310 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:29.310 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:29.310 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:29.568 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:29.827 [ 00:17:29.827 { 00:17:29.827 "name": "BaseBdev2", 00:17:29.827 "aliases": [ 00:17:29.827 "f293b4cc-485d-4a16-952e-7e7b215460ac" 00:17:29.827 ], 00:17:29.827 "product_name": "Malloc disk", 00:17:29.827 "block_size": 512, 00:17:29.827 "num_blocks": 65536, 00:17:29.827 "uuid": "f293b4cc-485d-4a16-952e-7e7b215460ac", 00:17:29.827 "assigned_rate_limits": { 00:17:29.827 "rw_ios_per_sec": 0, 00:17:29.827 "rw_mbytes_per_sec": 0, 00:17:29.827 "r_mbytes_per_sec": 0, 00:17:29.827 "w_mbytes_per_sec": 0 00:17:29.827 }, 00:17:29.827 "claimed": false, 00:17:29.827 "zoned": false, 00:17:29.827 
"supported_io_types": { 00:17:29.827 "read": true, 00:17:29.827 "write": true, 00:17:29.827 "unmap": true, 00:17:29.827 "flush": true, 00:17:29.827 "reset": true, 00:17:29.827 "nvme_admin": false, 00:17:29.827 "nvme_io": false, 00:17:29.827 "nvme_io_md": false, 00:17:29.827 "write_zeroes": true, 00:17:29.827 "zcopy": true, 00:17:29.827 "get_zone_info": false, 00:17:29.827 "zone_management": false, 00:17:29.827 "zone_append": false, 00:17:29.827 "compare": false, 00:17:29.827 "compare_and_write": false, 00:17:29.827 "abort": true, 00:17:29.827 "seek_hole": false, 00:17:29.827 "seek_data": false, 00:17:29.827 "copy": true, 00:17:29.827 "nvme_iov_md": false 00:17:29.827 }, 00:17:29.827 "memory_domains": [ 00:17:29.827 { 00:17:29.827 "dma_device_id": "system", 00:17:29.827 "dma_device_type": 1 00:17:29.827 }, 00:17:29.827 { 00:17:29.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:29.827 "dma_device_type": 2 00:17:29.827 } 00:17:29.827 ], 00:17:29.827 "driver_specific": {} 00:17:29.827 } 00:17:29.827 ] 00:17:29.827 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:29.827 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:29.827 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:29.827 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:30.086 BaseBdev3 00:17:30.086 16:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:30.086 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:30.086 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:30.086 16:34:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:17:30.086 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:30.086 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:30.086 16:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:30.345 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:30.604 [ 00:17:30.604 { 00:17:30.604 "name": "BaseBdev3", 00:17:30.604 "aliases": [ 00:17:30.604 "19bd1e5d-fff1-4f44-8f97-3620853832df" 00:17:30.604 ], 00:17:30.604 "product_name": "Malloc disk", 00:17:30.604 "block_size": 512, 00:17:30.604 "num_blocks": 65536, 00:17:30.604 "uuid": "19bd1e5d-fff1-4f44-8f97-3620853832df", 00:17:30.604 "assigned_rate_limits": { 00:17:30.604 "rw_ios_per_sec": 0, 00:17:30.604 "rw_mbytes_per_sec": 0, 00:17:30.604 "r_mbytes_per_sec": 0, 00:17:30.604 "w_mbytes_per_sec": 0 00:17:30.604 }, 00:17:30.604 "claimed": false, 00:17:30.604 "zoned": false, 00:17:30.604 "supported_io_types": { 00:17:30.604 "read": true, 00:17:30.604 "write": true, 00:17:30.604 "unmap": true, 00:17:30.604 "flush": true, 00:17:30.604 "reset": true, 00:17:30.604 "nvme_admin": false, 00:17:30.604 "nvme_io": false, 00:17:30.604 "nvme_io_md": false, 00:17:30.604 "write_zeroes": true, 00:17:30.604 "zcopy": true, 00:17:30.604 "get_zone_info": false, 00:17:30.604 "zone_management": false, 00:17:30.604 "zone_append": false, 00:17:30.604 "compare": false, 00:17:30.604 "compare_and_write": false, 00:17:30.604 "abort": true, 00:17:30.604 "seek_hole": false, 00:17:30.604 "seek_data": false, 00:17:30.604 "copy": true, 00:17:30.604 "nvme_iov_md": false 00:17:30.604 }, 00:17:30.604 
"memory_domains": [ 00:17:30.604 { 00:17:30.604 "dma_device_id": "system", 00:17:30.604 "dma_device_type": 1 00:17:30.604 }, 00:17:30.604 { 00:17:30.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.604 "dma_device_type": 2 00:17:30.604 } 00:17:30.604 ], 00:17:30.604 "driver_specific": {} 00:17:30.604 } 00:17:30.604 ] 00:17:30.604 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:30.604 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:30.604 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:30.604 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:30.862 [2024-07-24 16:34:27.513931] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:30.862 [2024-07-24 16:34:27.513978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:30.862 [2024-07-24 16:34:27.514006] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:30.862 [2024-07-24 16:34:27.516293] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:30.862 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:30.862 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:30.862 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:30.862 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:30.862 16:34:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:30.862 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:30.862 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.862 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.862 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.862 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.862 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.862 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.121 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:31.121 "name": "Existed_Raid", 00:17:31.121 "uuid": "2046ed14-fd6f-4e73-9e6b-9cde203a64ab", 00:17:31.121 "strip_size_kb": 64, 00:17:31.121 "state": "configuring", 00:17:31.121 "raid_level": "raid0", 00:17:31.121 "superblock": true, 00:17:31.121 "num_base_bdevs": 3, 00:17:31.121 "num_base_bdevs_discovered": 2, 00:17:31.121 "num_base_bdevs_operational": 3, 00:17:31.121 "base_bdevs_list": [ 00:17:31.121 { 00:17:31.121 "name": "BaseBdev1", 00:17:31.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.121 "is_configured": false, 00:17:31.121 "data_offset": 0, 00:17:31.121 "data_size": 0 00:17:31.121 }, 00:17:31.121 { 00:17:31.121 "name": "BaseBdev2", 00:17:31.121 "uuid": "f293b4cc-485d-4a16-952e-7e7b215460ac", 00:17:31.121 "is_configured": true, 00:17:31.121 "data_offset": 2048, 00:17:31.121 "data_size": 63488 00:17:31.121 }, 00:17:31.121 { 00:17:31.121 "name": "BaseBdev3", 00:17:31.121 "uuid": 
"19bd1e5d-fff1-4f44-8f97-3620853832df", 00:17:31.121 "is_configured": true, 00:17:31.121 "data_offset": 2048, 00:17:31.121 "data_size": 63488 00:17:31.121 } 00:17:31.121 ] 00:17:31.121 }' 00:17:31.121 16:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:31.121 16:34:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.687 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:31.687 [2024-07-24 16:34:28.532674] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:31.946 "name": "Existed_Raid", 00:17:31.946 "uuid": "2046ed14-fd6f-4e73-9e6b-9cde203a64ab", 00:17:31.946 "strip_size_kb": 64, 00:17:31.946 "state": "configuring", 00:17:31.946 "raid_level": "raid0", 00:17:31.946 "superblock": true, 00:17:31.946 "num_base_bdevs": 3, 00:17:31.946 "num_base_bdevs_discovered": 1, 00:17:31.946 "num_base_bdevs_operational": 3, 00:17:31.946 "base_bdevs_list": [ 00:17:31.946 { 00:17:31.946 "name": "BaseBdev1", 00:17:31.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.946 "is_configured": false, 00:17:31.946 "data_offset": 0, 00:17:31.946 "data_size": 0 00:17:31.946 }, 00:17:31.946 { 00:17:31.946 "name": null, 00:17:31.946 "uuid": "f293b4cc-485d-4a16-952e-7e7b215460ac", 00:17:31.946 "is_configured": false, 00:17:31.946 "data_offset": 2048, 00:17:31.946 "data_size": 63488 00:17:31.946 }, 00:17:31.946 { 00:17:31.946 "name": "BaseBdev3", 00:17:31.946 "uuid": "19bd1e5d-fff1-4f44-8f97-3620853832df", 00:17:31.946 "is_configured": true, 00:17:31.946 "data_offset": 2048, 00:17:31.946 "data_size": 63488 00:17:31.946 } 00:17:31.946 ] 00:17:31.946 }' 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:31.946 16:34:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:32.512 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.512 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:17:32.771 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:32.771 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:33.029 [2024-07-24 16:34:29.823769] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.029 BaseBdev1 00:17:33.029 16:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:33.029 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:33.029 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:33.029 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:33.029 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:33.029 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:33.029 16:34:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:33.287 16:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:33.585 [ 00:17:33.585 { 00:17:33.585 "name": "BaseBdev1", 00:17:33.585 "aliases": [ 00:17:33.585 "c76a4e98-977a-4bab-9e2e-86c4c719c577" 00:17:33.585 ], 00:17:33.585 "product_name": "Malloc disk", 00:17:33.585 "block_size": 512, 00:17:33.585 "num_blocks": 65536, 00:17:33.585 "uuid": "c76a4e98-977a-4bab-9e2e-86c4c719c577", 00:17:33.585 
"assigned_rate_limits": { 00:17:33.585 "rw_ios_per_sec": 0, 00:17:33.585 "rw_mbytes_per_sec": 0, 00:17:33.585 "r_mbytes_per_sec": 0, 00:17:33.585 "w_mbytes_per_sec": 0 00:17:33.585 }, 00:17:33.585 "claimed": true, 00:17:33.585 "claim_type": "exclusive_write", 00:17:33.585 "zoned": false, 00:17:33.585 "supported_io_types": { 00:17:33.585 "read": true, 00:17:33.585 "write": true, 00:17:33.585 "unmap": true, 00:17:33.585 "flush": true, 00:17:33.585 "reset": true, 00:17:33.585 "nvme_admin": false, 00:17:33.585 "nvme_io": false, 00:17:33.585 "nvme_io_md": false, 00:17:33.585 "write_zeroes": true, 00:17:33.585 "zcopy": true, 00:17:33.585 "get_zone_info": false, 00:17:33.585 "zone_management": false, 00:17:33.585 "zone_append": false, 00:17:33.585 "compare": false, 00:17:33.585 "compare_and_write": false, 00:17:33.586 "abort": true, 00:17:33.586 "seek_hole": false, 00:17:33.586 "seek_data": false, 00:17:33.586 "copy": true, 00:17:33.586 "nvme_iov_md": false 00:17:33.586 }, 00:17:33.586 "memory_domains": [ 00:17:33.586 { 00:17:33.586 "dma_device_id": "system", 00:17:33.586 "dma_device_type": 1 00:17:33.586 }, 00:17:33.586 { 00:17:33.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.586 "dma_device_type": 2 00:17:33.586 } 00:17:33.586 ], 00:17:33.586 "driver_specific": {} 00:17:33.586 } 00:17:33.586 ] 00:17:33.586 16:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:33.586 16:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:33.586 16:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:33.586 16:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:33.586 16:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:33.586 16:34:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:33.586 16:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:33.586 16:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.586 16:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.586 16:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.586 16:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.586 16:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.586 16:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.845 16:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.845 "name": "Existed_Raid", 00:17:33.845 "uuid": "2046ed14-fd6f-4e73-9e6b-9cde203a64ab", 00:17:33.845 "strip_size_kb": 64, 00:17:33.845 "state": "configuring", 00:17:33.845 "raid_level": "raid0", 00:17:33.845 "superblock": true, 00:17:33.845 "num_base_bdevs": 3, 00:17:33.845 "num_base_bdevs_discovered": 2, 00:17:33.845 "num_base_bdevs_operational": 3, 00:17:33.845 "base_bdevs_list": [ 00:17:33.845 { 00:17:33.845 "name": "BaseBdev1", 00:17:33.845 "uuid": "c76a4e98-977a-4bab-9e2e-86c4c719c577", 00:17:33.846 "is_configured": true, 00:17:33.846 "data_offset": 2048, 00:17:33.846 "data_size": 63488 00:17:33.846 }, 00:17:33.846 { 00:17:33.846 "name": null, 00:17:33.846 "uuid": "f293b4cc-485d-4a16-952e-7e7b215460ac", 00:17:33.846 "is_configured": false, 00:17:33.846 "data_offset": 2048, 00:17:33.846 "data_size": 63488 00:17:33.846 }, 00:17:33.846 { 00:17:33.846 "name": "BaseBdev3", 00:17:33.846 "uuid": 
"19bd1e5d-fff1-4f44-8f97-3620853832df", 00:17:33.846 "is_configured": true, 00:17:33.846 "data_offset": 2048, 00:17:33.846 "data_size": 63488 00:17:33.846 } 00:17:33.846 ] 00:17:33.846 }' 00:17:33.846 16:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.846 16:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.412 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.412 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:34.670 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:34.670 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:34.670 [2024-07-24 16:34:31.524642] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 
-- # local raid_bdev_info 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.929 "name": "Existed_Raid", 00:17:34.929 "uuid": "2046ed14-fd6f-4e73-9e6b-9cde203a64ab", 00:17:34.929 "strip_size_kb": 64, 00:17:34.929 "state": "configuring", 00:17:34.929 "raid_level": "raid0", 00:17:34.929 "superblock": true, 00:17:34.929 "num_base_bdevs": 3, 00:17:34.929 "num_base_bdevs_discovered": 1, 00:17:34.929 "num_base_bdevs_operational": 3, 00:17:34.929 "base_bdevs_list": [ 00:17:34.929 { 00:17:34.929 "name": "BaseBdev1", 00:17:34.929 "uuid": "c76a4e98-977a-4bab-9e2e-86c4c719c577", 00:17:34.929 "is_configured": true, 00:17:34.929 "data_offset": 2048, 00:17:34.929 "data_size": 63488 00:17:34.929 }, 00:17:34.929 { 00:17:34.929 "name": null, 00:17:34.929 "uuid": "f293b4cc-485d-4a16-952e-7e7b215460ac", 00:17:34.929 "is_configured": false, 00:17:34.929 "data_offset": 2048, 00:17:34.929 "data_size": 63488 00:17:34.929 }, 00:17:34.929 { 00:17:34.929 "name": null, 00:17:34.929 "uuid": "19bd1e5d-fff1-4f44-8f97-3620853832df", 00:17:34.929 "is_configured": false, 00:17:34.929 "data_offset": 2048, 00:17:34.929 "data_size": 63488 00:17:34.929 } 00:17:34.929 ] 00:17:34.929 }' 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:17:34.929 16:34:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.864 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.864 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:35.864 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:35.864 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:36.122 [2024-07-24 16:34:32.800102] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:36.122 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:36.122 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:36.122 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:36.123 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:36.123 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:36.123 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:36.123 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.123 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.123 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:17:36.123 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.123 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.123 16:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.381 16:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.381 "name": "Existed_Raid", 00:17:36.381 "uuid": "2046ed14-fd6f-4e73-9e6b-9cde203a64ab", 00:17:36.381 "strip_size_kb": 64, 00:17:36.381 "state": "configuring", 00:17:36.381 "raid_level": "raid0", 00:17:36.381 "superblock": true, 00:17:36.381 "num_base_bdevs": 3, 00:17:36.381 "num_base_bdevs_discovered": 2, 00:17:36.381 "num_base_bdevs_operational": 3, 00:17:36.381 "base_bdevs_list": [ 00:17:36.381 { 00:17:36.381 "name": "BaseBdev1", 00:17:36.381 "uuid": "c76a4e98-977a-4bab-9e2e-86c4c719c577", 00:17:36.381 "is_configured": true, 00:17:36.381 "data_offset": 2048, 00:17:36.381 "data_size": 63488 00:17:36.381 }, 00:17:36.381 { 00:17:36.381 "name": null, 00:17:36.381 "uuid": "f293b4cc-485d-4a16-952e-7e7b215460ac", 00:17:36.381 "is_configured": false, 00:17:36.381 "data_offset": 2048, 00:17:36.381 "data_size": 63488 00:17:36.381 }, 00:17:36.381 { 00:17:36.381 "name": "BaseBdev3", 00:17:36.381 "uuid": "19bd1e5d-fff1-4f44-8f97-3620853832df", 00:17:36.381 "is_configured": true, 00:17:36.381 "data_offset": 2048, 00:17:36.381 "data_size": 63488 00:17:36.381 } 00:17:36.381 ] 00:17:36.381 }' 00:17:36.381 16:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.381 16:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.949 16:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:17:36.949 16:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.949 16:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:36.949 16:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:37.207 [2024-07-24 16:34:34.011370] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.465 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:37.465 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:37.465 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:37.465 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:37.465 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:37.465 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:37.465 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.465 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.465 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.465 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.465 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.465 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.724 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.724 "name": "Existed_Raid", 00:17:37.724 "uuid": "2046ed14-fd6f-4e73-9e6b-9cde203a64ab", 00:17:37.724 "strip_size_kb": 64, 00:17:37.724 "state": "configuring", 00:17:37.724 "raid_level": "raid0", 00:17:37.724 "superblock": true, 00:17:37.724 "num_base_bdevs": 3, 00:17:37.724 "num_base_bdevs_discovered": 1, 00:17:37.724 "num_base_bdevs_operational": 3, 00:17:37.724 "base_bdevs_list": [ 00:17:37.724 { 00:17:37.724 "name": null, 00:17:37.724 "uuid": "c76a4e98-977a-4bab-9e2e-86c4c719c577", 00:17:37.724 "is_configured": false, 00:17:37.724 "data_offset": 2048, 00:17:37.724 "data_size": 63488 00:17:37.724 }, 00:17:37.724 { 00:17:37.724 "name": null, 00:17:37.724 "uuid": "f293b4cc-485d-4a16-952e-7e7b215460ac", 00:17:37.724 "is_configured": false, 00:17:37.724 "data_offset": 2048, 00:17:37.724 "data_size": 63488 00:17:37.724 }, 00:17:37.724 { 00:17:37.724 "name": "BaseBdev3", 00:17:37.724 "uuid": "19bd1e5d-fff1-4f44-8f97-3620853832df", 00:17:37.724 "is_configured": true, 00:17:37.724 "data_offset": 2048, 00:17:37.724 "data_size": 63488 00:17:37.724 } 00:17:37.724 ] 00:17:37.724 }' 00:17:37.724 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.724 16:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:38.290 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.290 16:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:17:38.548 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:38.548 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:38.807 [2024-07-24 16:34:35.421569] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:38.807 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:38.807 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:38.807 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:38.807 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:38.807 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:38.807 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:38.807 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.807 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.807 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.807 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.807 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.807 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:17:39.065 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:39.065 "name": "Existed_Raid", 00:17:39.065 "uuid": "2046ed14-fd6f-4e73-9e6b-9cde203a64ab", 00:17:39.065 "strip_size_kb": 64, 00:17:39.065 "state": "configuring", 00:17:39.065 "raid_level": "raid0", 00:17:39.065 "superblock": true, 00:17:39.065 "num_base_bdevs": 3, 00:17:39.065 "num_base_bdevs_discovered": 2, 00:17:39.065 "num_base_bdevs_operational": 3, 00:17:39.065 "base_bdevs_list": [ 00:17:39.065 { 00:17:39.065 "name": null, 00:17:39.065 "uuid": "c76a4e98-977a-4bab-9e2e-86c4c719c577", 00:17:39.065 "is_configured": false, 00:17:39.065 "data_offset": 2048, 00:17:39.065 "data_size": 63488 00:17:39.065 }, 00:17:39.065 { 00:17:39.065 "name": "BaseBdev2", 00:17:39.065 "uuid": "f293b4cc-485d-4a16-952e-7e7b215460ac", 00:17:39.065 "is_configured": true, 00:17:39.065 "data_offset": 2048, 00:17:39.065 "data_size": 63488 00:17:39.065 }, 00:17:39.065 { 00:17:39.065 "name": "BaseBdev3", 00:17:39.065 "uuid": "19bd1e5d-fff1-4f44-8f97-3620853832df", 00:17:39.065 "is_configured": true, 00:17:39.065 "data_offset": 2048, 00:17:39.065 "data_size": 63488 00:17:39.065 } 00:17:39.065 ] 00:17:39.065 }' 00:17:39.065 16:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:39.065 16:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.323 16:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.323 16:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:39.581 16:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:39.581 16:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.581 16:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:39.839 16:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c76a4e98-977a-4bab-9e2e-86c4c719c577 00:17:40.098 [2024-07-24 16:34:36.876607] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:40.098 [2024-07-24 16:34:36.876833] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:17:40.098 [2024-07-24 16:34:36.876856] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:40.098 [2024-07-24 16:34:36.877164] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:17:40.098 [2024-07-24 16:34:36.877378] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:17:40.098 [2024-07-24 16:34:36.877393] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:17:40.098 NewBaseBdev 00:17:40.098 [2024-07-24 16:34:36.877565] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.098 16:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:40.098 16:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:40.098 16:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:40.098 16:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:40.098 16:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ 
-z '' ]] 00:17:40.098 16:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:40.098 16:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:40.356 16:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:40.615 [ 00:17:40.615 { 00:17:40.615 "name": "NewBaseBdev", 00:17:40.615 "aliases": [ 00:17:40.615 "c76a4e98-977a-4bab-9e2e-86c4c719c577" 00:17:40.615 ], 00:17:40.615 "product_name": "Malloc disk", 00:17:40.615 "block_size": 512, 00:17:40.615 "num_blocks": 65536, 00:17:40.615 "uuid": "c76a4e98-977a-4bab-9e2e-86c4c719c577", 00:17:40.615 "assigned_rate_limits": { 00:17:40.615 "rw_ios_per_sec": 0, 00:17:40.615 "rw_mbytes_per_sec": 0, 00:17:40.615 "r_mbytes_per_sec": 0, 00:17:40.615 "w_mbytes_per_sec": 0 00:17:40.615 }, 00:17:40.615 "claimed": true, 00:17:40.615 "claim_type": "exclusive_write", 00:17:40.615 "zoned": false, 00:17:40.615 "supported_io_types": { 00:17:40.615 "read": true, 00:17:40.615 "write": true, 00:17:40.615 "unmap": true, 00:17:40.615 "flush": true, 00:17:40.615 "reset": true, 00:17:40.615 "nvme_admin": false, 00:17:40.615 "nvme_io": false, 00:17:40.615 "nvme_io_md": false, 00:17:40.615 "write_zeroes": true, 00:17:40.615 "zcopy": true, 00:17:40.615 "get_zone_info": false, 00:17:40.615 "zone_management": false, 00:17:40.615 "zone_append": false, 00:17:40.615 "compare": false, 00:17:40.615 "compare_and_write": false, 00:17:40.615 "abort": true, 00:17:40.615 "seek_hole": false, 00:17:40.615 "seek_data": false, 00:17:40.615 "copy": true, 00:17:40.615 "nvme_iov_md": false 00:17:40.615 }, 00:17:40.615 "memory_domains": [ 00:17:40.615 { 00:17:40.615 "dma_device_id": "system", 00:17:40.615 
"dma_device_type": 1 00:17:40.615 }, 00:17:40.615 { 00:17:40.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.615 "dma_device_type": 2 00:17:40.615 } 00:17:40.615 ], 00:17:40.615 "driver_specific": {} 00:17:40.615 } 00:17:40.615 ] 00:17:40.615 16:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:40.615 16:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:40.615 16:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:40.615 16:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:40.615 16:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:40.615 16:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:40.615 16:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:40.615 16:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.615 16:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.615 16:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.615 16:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.615 16:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.615 16:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.874 16:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.874 "name": 
"Existed_Raid", 00:17:40.874 "uuid": "2046ed14-fd6f-4e73-9e6b-9cde203a64ab", 00:17:40.874 "strip_size_kb": 64, 00:17:40.874 "state": "online", 00:17:40.874 "raid_level": "raid0", 00:17:40.874 "superblock": true, 00:17:40.874 "num_base_bdevs": 3, 00:17:40.874 "num_base_bdevs_discovered": 3, 00:17:40.874 "num_base_bdevs_operational": 3, 00:17:40.874 "base_bdevs_list": [ 00:17:40.874 { 00:17:40.874 "name": "NewBaseBdev", 00:17:40.874 "uuid": "c76a4e98-977a-4bab-9e2e-86c4c719c577", 00:17:40.874 "is_configured": true, 00:17:40.874 "data_offset": 2048, 00:17:40.874 "data_size": 63488 00:17:40.874 }, 00:17:40.874 { 00:17:40.874 "name": "BaseBdev2", 00:17:40.874 "uuid": "f293b4cc-485d-4a16-952e-7e7b215460ac", 00:17:40.874 "is_configured": true, 00:17:40.874 "data_offset": 2048, 00:17:40.874 "data_size": 63488 00:17:40.874 }, 00:17:40.874 { 00:17:40.874 "name": "BaseBdev3", 00:17:40.874 "uuid": "19bd1e5d-fff1-4f44-8f97-3620853832df", 00:17:40.874 "is_configured": true, 00:17:40.874 "data_offset": 2048, 00:17:40.874 "data_size": 63488 00:17:40.874 } 00:17:40.874 ] 00:17:40.874 }' 00:17:40.874 16:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.874 16:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.440 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:41.440 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:41.440 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:41.440 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:41.440 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:41.440 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:41.440 
16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:41.440 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:41.440 [2024-07-24 16:34:38.228777] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:41.440 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:41.440 "name": "Existed_Raid", 00:17:41.440 "aliases": [ 00:17:41.440 "2046ed14-fd6f-4e73-9e6b-9cde203a64ab" 00:17:41.440 ], 00:17:41.440 "product_name": "Raid Volume", 00:17:41.440 "block_size": 512, 00:17:41.440 "num_blocks": 190464, 00:17:41.440 "uuid": "2046ed14-fd6f-4e73-9e6b-9cde203a64ab", 00:17:41.440 "assigned_rate_limits": { 00:17:41.440 "rw_ios_per_sec": 0, 00:17:41.440 "rw_mbytes_per_sec": 0, 00:17:41.440 "r_mbytes_per_sec": 0, 00:17:41.440 "w_mbytes_per_sec": 0 00:17:41.440 }, 00:17:41.440 "claimed": false, 00:17:41.440 "zoned": false, 00:17:41.440 "supported_io_types": { 00:17:41.440 "read": true, 00:17:41.440 "write": true, 00:17:41.440 "unmap": true, 00:17:41.440 "flush": true, 00:17:41.440 "reset": true, 00:17:41.440 "nvme_admin": false, 00:17:41.440 "nvme_io": false, 00:17:41.440 "nvme_io_md": false, 00:17:41.440 "write_zeroes": true, 00:17:41.440 "zcopy": false, 00:17:41.440 "get_zone_info": false, 00:17:41.440 "zone_management": false, 00:17:41.440 "zone_append": false, 00:17:41.440 "compare": false, 00:17:41.440 "compare_and_write": false, 00:17:41.440 "abort": false, 00:17:41.440 "seek_hole": false, 00:17:41.440 "seek_data": false, 00:17:41.440 "copy": false, 00:17:41.440 "nvme_iov_md": false 00:17:41.440 }, 00:17:41.440 "memory_domains": [ 00:17:41.440 { 00:17:41.440 "dma_device_id": "system", 00:17:41.440 "dma_device_type": 1 00:17:41.440 }, 00:17:41.440 { 00:17:41.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:17:41.440 "dma_device_type": 2 00:17:41.440 }, 00:17:41.440 { 00:17:41.440 "dma_device_id": "system", 00:17:41.440 "dma_device_type": 1 00:17:41.440 }, 00:17:41.440 { 00:17:41.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.440 "dma_device_type": 2 00:17:41.440 }, 00:17:41.440 { 00:17:41.440 "dma_device_id": "system", 00:17:41.440 "dma_device_type": 1 00:17:41.440 }, 00:17:41.440 { 00:17:41.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.440 "dma_device_type": 2 00:17:41.440 } 00:17:41.440 ], 00:17:41.440 "driver_specific": { 00:17:41.440 "raid": { 00:17:41.440 "uuid": "2046ed14-fd6f-4e73-9e6b-9cde203a64ab", 00:17:41.440 "strip_size_kb": 64, 00:17:41.440 "state": "online", 00:17:41.440 "raid_level": "raid0", 00:17:41.440 "superblock": true, 00:17:41.440 "num_base_bdevs": 3, 00:17:41.440 "num_base_bdevs_discovered": 3, 00:17:41.440 "num_base_bdevs_operational": 3, 00:17:41.440 "base_bdevs_list": [ 00:17:41.440 { 00:17:41.440 "name": "NewBaseBdev", 00:17:41.440 "uuid": "c76a4e98-977a-4bab-9e2e-86c4c719c577", 00:17:41.440 "is_configured": true, 00:17:41.440 "data_offset": 2048, 00:17:41.440 "data_size": 63488 00:17:41.440 }, 00:17:41.440 { 00:17:41.440 "name": "BaseBdev2", 00:17:41.440 "uuid": "f293b4cc-485d-4a16-952e-7e7b215460ac", 00:17:41.440 "is_configured": true, 00:17:41.440 "data_offset": 2048, 00:17:41.440 "data_size": 63488 00:17:41.440 }, 00:17:41.440 { 00:17:41.440 "name": "BaseBdev3", 00:17:41.440 "uuid": "19bd1e5d-fff1-4f44-8f97-3620853832df", 00:17:41.440 "is_configured": true, 00:17:41.440 "data_offset": 2048, 00:17:41.440 "data_size": 63488 00:17:41.440 } 00:17:41.440 ] 00:17:41.440 } 00:17:41.440 } 00:17:41.440 }' 00:17:41.440 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:41.440 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:41.440 BaseBdev2 
00:17:41.440 BaseBdev3' 00:17:41.440 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:41.440 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:41.440 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:41.698 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:41.698 "name": "NewBaseBdev", 00:17:41.699 "aliases": [ 00:17:41.699 "c76a4e98-977a-4bab-9e2e-86c4c719c577" 00:17:41.699 ], 00:17:41.699 "product_name": "Malloc disk", 00:17:41.699 "block_size": 512, 00:17:41.699 "num_blocks": 65536, 00:17:41.699 "uuid": "c76a4e98-977a-4bab-9e2e-86c4c719c577", 00:17:41.699 "assigned_rate_limits": { 00:17:41.699 "rw_ios_per_sec": 0, 00:17:41.699 "rw_mbytes_per_sec": 0, 00:17:41.699 "r_mbytes_per_sec": 0, 00:17:41.699 "w_mbytes_per_sec": 0 00:17:41.699 }, 00:17:41.699 "claimed": true, 00:17:41.699 "claim_type": "exclusive_write", 00:17:41.699 "zoned": false, 00:17:41.699 "supported_io_types": { 00:17:41.699 "read": true, 00:17:41.699 "write": true, 00:17:41.699 "unmap": true, 00:17:41.699 "flush": true, 00:17:41.699 "reset": true, 00:17:41.699 "nvme_admin": false, 00:17:41.699 "nvme_io": false, 00:17:41.699 "nvme_io_md": false, 00:17:41.699 "write_zeroes": true, 00:17:41.699 "zcopy": true, 00:17:41.699 "get_zone_info": false, 00:17:41.699 "zone_management": false, 00:17:41.699 "zone_append": false, 00:17:41.699 "compare": false, 00:17:41.699 "compare_and_write": false, 00:17:41.699 "abort": true, 00:17:41.699 "seek_hole": false, 00:17:41.699 "seek_data": false, 00:17:41.699 "copy": true, 00:17:41.699 "nvme_iov_md": false 00:17:41.699 }, 00:17:41.699 "memory_domains": [ 00:17:41.699 { 00:17:41.699 "dma_device_id": "system", 00:17:41.699 "dma_device_type": 1 00:17:41.699 }, 
00:17:41.699 { 00:17:41.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.699 "dma_device_type": 2 00:17:41.699 } 00:17:41.699 ], 00:17:41.699 "driver_specific": {} 00:17:41.699 }' 00:17:41.699 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.699 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:41.958 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:41.958 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.958 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:41.958 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:41.958 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.958 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.958 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:41.958 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.217 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.217 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:42.217 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:42.217 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:42.217 16:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:42.476 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:17:42.476 "name": "BaseBdev2", 00:17:42.476 "aliases": [ 00:17:42.476 "f293b4cc-485d-4a16-952e-7e7b215460ac" 00:17:42.476 ], 00:17:42.476 "product_name": "Malloc disk", 00:17:42.476 "block_size": 512, 00:17:42.476 "num_blocks": 65536, 00:17:42.476 "uuid": "f293b4cc-485d-4a16-952e-7e7b215460ac", 00:17:42.476 "assigned_rate_limits": { 00:17:42.476 "rw_ios_per_sec": 0, 00:17:42.476 "rw_mbytes_per_sec": 0, 00:17:42.476 "r_mbytes_per_sec": 0, 00:17:42.476 "w_mbytes_per_sec": 0 00:17:42.476 }, 00:17:42.476 "claimed": true, 00:17:42.476 "claim_type": "exclusive_write", 00:17:42.476 "zoned": false, 00:17:42.476 "supported_io_types": { 00:17:42.476 "read": true, 00:17:42.476 "write": true, 00:17:42.476 "unmap": true, 00:17:42.476 "flush": true, 00:17:42.476 "reset": true, 00:17:42.476 "nvme_admin": false, 00:17:42.476 "nvme_io": false, 00:17:42.476 "nvme_io_md": false, 00:17:42.476 "write_zeroes": true, 00:17:42.476 "zcopy": true, 00:17:42.476 "get_zone_info": false, 00:17:42.476 "zone_management": false, 00:17:42.476 "zone_append": false, 00:17:42.476 "compare": false, 00:17:42.476 "compare_and_write": false, 00:17:42.476 "abort": true, 00:17:42.476 "seek_hole": false, 00:17:42.476 "seek_data": false, 00:17:42.476 "copy": true, 00:17:42.476 "nvme_iov_md": false 00:17:42.476 }, 00:17:42.476 "memory_domains": [ 00:17:42.476 { 00:17:42.476 "dma_device_id": "system", 00:17:42.476 "dma_device_type": 1 00:17:42.476 }, 00:17:42.476 { 00:17:42.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.476 "dma_device_type": 2 00:17:42.476 } 00:17:42.476 ], 00:17:42.476 "driver_specific": {} 00:17:42.476 }' 00:17:42.476 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.476 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.476 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:42.476 16:34:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.476 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.476 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:42.476 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.476 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.476 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:42.476 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.735 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.735 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:42.735 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:42.735 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:42.735 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:42.994 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:42.995 "name": "BaseBdev3", 00:17:42.995 "aliases": [ 00:17:42.995 "19bd1e5d-fff1-4f44-8f97-3620853832df" 00:17:42.995 ], 00:17:42.995 "product_name": "Malloc disk", 00:17:42.995 "block_size": 512, 00:17:42.995 "num_blocks": 65536, 00:17:42.995 "uuid": "19bd1e5d-fff1-4f44-8f97-3620853832df", 00:17:42.995 "assigned_rate_limits": { 00:17:42.995 "rw_ios_per_sec": 0, 00:17:42.995 "rw_mbytes_per_sec": 0, 00:17:42.995 "r_mbytes_per_sec": 0, 00:17:42.995 "w_mbytes_per_sec": 0 00:17:42.995 }, 00:17:42.995 "claimed": true, 00:17:42.995 "claim_type": "exclusive_write", 
00:17:42.995 "zoned": false, 00:17:42.995 "supported_io_types": { 00:17:42.995 "read": true, 00:17:42.995 "write": true, 00:17:42.995 "unmap": true, 00:17:42.995 "flush": true, 00:17:42.995 "reset": true, 00:17:42.995 "nvme_admin": false, 00:17:42.995 "nvme_io": false, 00:17:42.995 "nvme_io_md": false, 00:17:42.995 "write_zeroes": true, 00:17:42.995 "zcopy": true, 00:17:42.995 "get_zone_info": false, 00:17:42.995 "zone_management": false, 00:17:42.995 "zone_append": false, 00:17:42.995 "compare": false, 00:17:42.995 "compare_and_write": false, 00:17:42.995 "abort": true, 00:17:42.995 "seek_hole": false, 00:17:42.995 "seek_data": false, 00:17:42.995 "copy": true, 00:17:42.995 "nvme_iov_md": false 00:17:42.995 }, 00:17:42.995 "memory_domains": [ 00:17:42.995 { 00:17:42.995 "dma_device_id": "system", 00:17:42.995 "dma_device_type": 1 00:17:42.995 }, 00:17:42.995 { 00:17:42.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.995 "dma_device_type": 2 00:17:42.995 } 00:17:42.995 ], 00:17:42.995 "driver_specific": {} 00:17:42.995 }' 00:17:42.995 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.995 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.995 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:42.995 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.995 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.995 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:42.995 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.995 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:43.254 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:17:43.254 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.254 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:43.254 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:43.254 16:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:43.514 [2024-07-24 16:34:40.149608] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.514 [2024-07-24 16:34:40.149646] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.514 [2024-07-24 16:34:40.149730] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.514 [2024-07-24 16:34:40.149794] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.514 [2024-07-24 16:34:40.149815] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:17:43.514 16:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1630970 00:17:43.514 16:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 1630970 ']' 00:17:43.514 16:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 1630970 00:17:43.514 16:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:43.514 16:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.514 16:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1630970 00:17:43.514 16:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:17:43.514 16:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:43.514 16:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1630970' 00:17:43.514 killing process with pid 1630970 00:17:43.514 16:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 1630970 00:17:43.514 [2024-07-24 16:34:40.221910] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.514 16:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 1630970 00:17:43.773 [2024-07-24 16:34:40.553589] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.680 16:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:45.680 00:17:45.680 real 0m29.350s 00:17:45.680 user 0m51.377s 00:17:45.680 sys 0m5.021s 00:17:45.680 16:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:45.680 16:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.680 ************************************ 00:17:45.680 END TEST raid_state_function_test_sb 00:17:45.680 ************************************ 00:17:45.680 16:34:42 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:17:45.680 16:34:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:45.680 16:34:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:45.680 16:34:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:45.680 ************************************ 00:17:45.680 START TEST raid_superblock_test 00:17:45.680 ************************************ 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:17:45.680 16:34:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=1636505 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 1636505 /var/tmp/spdk-raid.sock 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 1636505 ']' 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:45.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.680 16:34:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.680 [2024-07-24 16:34:42.490510] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:17:45.680 [2024-07-24 16:34:42.490632] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1636505 ] 00:17:45.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.940 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:45.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.940 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:45.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.940 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:45.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.940 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:45.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.940 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:45.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.940 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:45.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.940 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:45.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.940 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:45.940 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.940 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3d:02.3 cannot be used 
00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:45.941 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:45.941 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:45.941 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:45.941 [2024-07-24 16:34:42.717842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.200 [2024-07-24 16:34:42.997345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.459 [2024-07-24 16:34:43.313250] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.459 [2024-07-24 16:34:43.313283] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.717 16:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.717 16:34:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:17:46.717 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:17:46.717 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:46.717 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:17:46.717 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:17:46.717 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:46.717 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:46.717 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:46.717 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:46.717 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:46.976 malloc1 00:17:46.976 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:47.259 [2024-07-24 16:34:43.960248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:47.259 [2024-07-24 16:34:43.960311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.259 [2024-07-24 16:34:43.960341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:17:47.259 [2024-07-24 16:34:43.960358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.259 [2024-07-24 16:34:43.963110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.259 [2024-07-24 16:34:43.963155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:47.259 pt1 00:17:47.259 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:47.259 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:47.259 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:17:47.259 16:34:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:17:47.259 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:47.259 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.259 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.259 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.259 16:34:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:47.522 malloc2 00:17:47.522 16:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:47.782 [2024-07-24 16:34:44.447271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:47.782 [2024-07-24 16:34:44.447328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.782 [2024-07-24 16:34:44.447356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:17:47.782 [2024-07-24 16:34:44.447371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.782 [2024-07-24 16:34:44.450129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.782 [2024-07-24 16:34:44.450177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:47.782 pt2 00:17:47.782 16:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:47.782 16:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:47.782 16:34:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:17:47.782 16:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:17:47.782 16:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:47.782 16:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.782 16:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.782 16:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.782 16:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:48.041 malloc3 00:17:48.041 16:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:48.300 [2024-07-24 16:34:44.956914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:48.300 [2024-07-24 16:34:44.956976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.300 [2024-07-24 16:34:44.957007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:17:48.300 [2024-07-24 16:34:44.957023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.300 [2024-07-24 16:34:44.959782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.300 [2024-07-24 16:34:44.959814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:48.300 pt3 00:17:48.300 16:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 
00:17:48.300 16:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:48.300 16:34:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:48.559 [2024-07-24 16:34:45.185584] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:48.559 [2024-07-24 16:34:45.187954] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:48.559 [2024-07-24 16:34:45.188038] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:48.559 [2024-07-24 16:34:45.188267] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041480 00:17:48.559 [2024-07-24 16:34:45.188289] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:48.559 [2024-07-24 16:34:45.188668] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:17:48.559 [2024-07-24 16:34:45.188921] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041480 00:17:48.559 [2024-07-24 16:34:45.188937] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041480 00:17:48.559 [2024-07-24 16:34:45.189164] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.559 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:48.559 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:48.559 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:48.559 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:48.559 16:34:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:48.559 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:48.559 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.559 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.559 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.559 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.559 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.559 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.818 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.818 "name": "raid_bdev1", 00:17:48.818 "uuid": "0fd51a5b-6735-44bd-88a5-c3e870e9d72f", 00:17:48.818 "strip_size_kb": 64, 00:17:48.818 "state": "online", 00:17:48.818 "raid_level": "raid0", 00:17:48.818 "superblock": true, 00:17:48.818 "num_base_bdevs": 3, 00:17:48.818 "num_base_bdevs_discovered": 3, 00:17:48.818 "num_base_bdevs_operational": 3, 00:17:48.818 "base_bdevs_list": [ 00:17:48.818 { 00:17:48.818 "name": "pt1", 00:17:48.818 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.818 "is_configured": true, 00:17:48.818 "data_offset": 2048, 00:17:48.818 "data_size": 63488 00:17:48.818 }, 00:17:48.818 { 00:17:48.818 "name": "pt2", 00:17:48.818 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.818 "is_configured": true, 00:17:48.818 "data_offset": 2048, 00:17:48.818 "data_size": 63488 00:17:48.818 }, 00:17:48.818 { 00:17:48.818 "name": "pt3", 00:17:48.818 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:48.818 
"is_configured": true, 00:17:48.818 "data_offset": 2048, 00:17:48.818 "data_size": 63488 00:17:48.818 } 00:17:48.818 ] 00:17:48.818 }' 00:17:48.818 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.818 16:34:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.386 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:17:49.386 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:49.386 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:49.386 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:49.386 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:49.386 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:49.386 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:49.386 16:34:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:49.386 [2024-07-24 16:34:46.188600] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.386 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:49.386 "name": "raid_bdev1", 00:17:49.386 "aliases": [ 00:17:49.386 "0fd51a5b-6735-44bd-88a5-c3e870e9d72f" 00:17:49.386 ], 00:17:49.386 "product_name": "Raid Volume", 00:17:49.386 "block_size": 512, 00:17:49.386 "num_blocks": 190464, 00:17:49.386 "uuid": "0fd51a5b-6735-44bd-88a5-c3e870e9d72f", 00:17:49.386 "assigned_rate_limits": { 00:17:49.386 "rw_ios_per_sec": 0, 00:17:49.386 "rw_mbytes_per_sec": 0, 00:17:49.386 "r_mbytes_per_sec": 0, 00:17:49.386 "w_mbytes_per_sec": 0 00:17:49.386 }, 
00:17:49.386 "claimed": false, 00:17:49.386 "zoned": false, 00:17:49.386 "supported_io_types": { 00:17:49.386 "read": true, 00:17:49.386 "write": true, 00:17:49.386 "unmap": true, 00:17:49.386 "flush": true, 00:17:49.386 "reset": true, 00:17:49.386 "nvme_admin": false, 00:17:49.386 "nvme_io": false, 00:17:49.386 "nvme_io_md": false, 00:17:49.386 "write_zeroes": true, 00:17:49.386 "zcopy": false, 00:17:49.386 "get_zone_info": false, 00:17:49.386 "zone_management": false, 00:17:49.386 "zone_append": false, 00:17:49.386 "compare": false, 00:17:49.386 "compare_and_write": false, 00:17:49.386 "abort": false, 00:17:49.386 "seek_hole": false, 00:17:49.386 "seek_data": false, 00:17:49.386 "copy": false, 00:17:49.386 "nvme_iov_md": false 00:17:49.386 }, 00:17:49.386 "memory_domains": [ 00:17:49.386 { 00:17:49.386 "dma_device_id": "system", 00:17:49.386 "dma_device_type": 1 00:17:49.386 }, 00:17:49.386 { 00:17:49.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.386 "dma_device_type": 2 00:17:49.386 }, 00:17:49.386 { 00:17:49.386 "dma_device_id": "system", 00:17:49.386 "dma_device_type": 1 00:17:49.386 }, 00:17:49.386 { 00:17:49.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.386 "dma_device_type": 2 00:17:49.386 }, 00:17:49.386 { 00:17:49.386 "dma_device_id": "system", 00:17:49.386 "dma_device_type": 1 00:17:49.386 }, 00:17:49.386 { 00:17:49.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.386 "dma_device_type": 2 00:17:49.386 } 00:17:49.386 ], 00:17:49.386 "driver_specific": { 00:17:49.386 "raid": { 00:17:49.386 "uuid": "0fd51a5b-6735-44bd-88a5-c3e870e9d72f", 00:17:49.386 "strip_size_kb": 64, 00:17:49.386 "state": "online", 00:17:49.386 "raid_level": "raid0", 00:17:49.386 "superblock": true, 00:17:49.386 "num_base_bdevs": 3, 00:17:49.386 "num_base_bdevs_discovered": 3, 00:17:49.386 "num_base_bdevs_operational": 3, 00:17:49.386 "base_bdevs_list": [ 00:17:49.386 { 00:17:49.386 "name": "pt1", 00:17:49.386 "uuid": "00000000-0000-0000-0000-000000000001", 
00:17:49.386 "is_configured": true, 00:17:49.386 "data_offset": 2048, 00:17:49.386 "data_size": 63488 00:17:49.386 }, 00:17:49.386 { 00:17:49.386 "name": "pt2", 00:17:49.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.386 "is_configured": true, 00:17:49.386 "data_offset": 2048, 00:17:49.387 "data_size": 63488 00:17:49.387 }, 00:17:49.387 { 00:17:49.387 "name": "pt3", 00:17:49.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:49.387 "is_configured": true, 00:17:49.387 "data_offset": 2048, 00:17:49.387 "data_size": 63488 00:17:49.387 } 00:17:49.387 ] 00:17:49.387 } 00:17:49.387 } 00:17:49.387 }' 00:17:49.387 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:49.645 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:49.646 pt2 00:17:49.646 pt3' 00:17:49.646 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:49.646 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:49.646 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:49.646 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:49.646 "name": "pt1", 00:17:49.646 "aliases": [ 00:17:49.646 "00000000-0000-0000-0000-000000000001" 00:17:49.646 ], 00:17:49.646 "product_name": "passthru", 00:17:49.646 "block_size": 512, 00:17:49.646 "num_blocks": 65536, 00:17:49.646 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.646 "assigned_rate_limits": { 00:17:49.646 "rw_ios_per_sec": 0, 00:17:49.646 "rw_mbytes_per_sec": 0, 00:17:49.646 "r_mbytes_per_sec": 0, 00:17:49.646 "w_mbytes_per_sec": 0 00:17:49.646 }, 00:17:49.646 "claimed": true, 00:17:49.646 "claim_type": "exclusive_write", 
00:17:49.646 "zoned": false, 00:17:49.646 "supported_io_types": { 00:17:49.646 "read": true, 00:17:49.646 "write": true, 00:17:49.646 "unmap": true, 00:17:49.646 "flush": true, 00:17:49.646 "reset": true, 00:17:49.646 "nvme_admin": false, 00:17:49.646 "nvme_io": false, 00:17:49.646 "nvme_io_md": false, 00:17:49.646 "write_zeroes": true, 00:17:49.646 "zcopy": true, 00:17:49.646 "get_zone_info": false, 00:17:49.646 "zone_management": false, 00:17:49.646 "zone_append": false, 00:17:49.646 "compare": false, 00:17:49.646 "compare_and_write": false, 00:17:49.646 "abort": true, 00:17:49.646 "seek_hole": false, 00:17:49.646 "seek_data": false, 00:17:49.646 "copy": true, 00:17:49.646 "nvme_iov_md": false 00:17:49.646 }, 00:17:49.646 "memory_domains": [ 00:17:49.646 { 00:17:49.646 "dma_device_id": "system", 00:17:49.646 "dma_device_type": 1 00:17:49.646 }, 00:17:49.646 { 00:17:49.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.646 "dma_device_type": 2 00:17:49.646 } 00:17:49.646 ], 00:17:49.646 "driver_specific": { 00:17:49.646 "passthru": { 00:17:49.646 "name": "pt1", 00:17:49.646 "base_bdev_name": "malloc1" 00:17:49.646 } 00:17:49.646 } 00:17:49.646 }' 00:17:49.646 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.905 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.905 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:49.905 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.905 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.905 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:49.905 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.905 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.905 16:34:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:49.905 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.905 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:50.164 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:50.164 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:50.164 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:50.164 16:34:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:50.164 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:50.164 "name": "pt2", 00:17:50.164 "aliases": [ 00:17:50.164 "00000000-0000-0000-0000-000000000002" 00:17:50.164 ], 00:17:50.164 "product_name": "passthru", 00:17:50.164 "block_size": 512, 00:17:50.164 "num_blocks": 65536, 00:17:50.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.164 "assigned_rate_limits": { 00:17:50.164 "rw_ios_per_sec": 0, 00:17:50.164 "rw_mbytes_per_sec": 0, 00:17:50.164 "r_mbytes_per_sec": 0, 00:17:50.164 "w_mbytes_per_sec": 0 00:17:50.164 }, 00:17:50.164 "claimed": true, 00:17:50.164 "claim_type": "exclusive_write", 00:17:50.164 "zoned": false, 00:17:50.164 "supported_io_types": { 00:17:50.164 "read": true, 00:17:50.164 "write": true, 00:17:50.164 "unmap": true, 00:17:50.164 "flush": true, 00:17:50.164 "reset": true, 00:17:50.164 "nvme_admin": false, 00:17:50.164 "nvme_io": false, 00:17:50.164 "nvme_io_md": false, 00:17:50.164 "write_zeroes": true, 00:17:50.164 "zcopy": true, 00:17:50.164 "get_zone_info": false, 00:17:50.164 "zone_management": false, 00:17:50.164 "zone_append": false, 00:17:50.164 "compare": false, 00:17:50.164 "compare_and_write": false, 00:17:50.164 
"abort": true, 00:17:50.164 "seek_hole": false, 00:17:50.164 "seek_data": false, 00:17:50.164 "copy": true, 00:17:50.164 "nvme_iov_md": false 00:17:50.164 }, 00:17:50.164 "memory_domains": [ 00:17:50.164 { 00:17:50.164 "dma_device_id": "system", 00:17:50.164 "dma_device_type": 1 00:17:50.164 }, 00:17:50.164 { 00:17:50.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.164 "dma_device_type": 2 00:17:50.164 } 00:17:50.164 ], 00:17:50.164 "driver_specific": { 00:17:50.164 "passthru": { 00:17:50.164 "name": "pt2", 00:17:50.164 "base_bdev_name": "malloc2" 00:17:50.164 } 00:17:50.164 } 00:17:50.164 }' 00:17:50.164 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:50.423 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:50.423 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:50.423 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:50.423 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:50.423 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:50.423 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:50.423 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:50.423 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:50.423 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:50.423 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:50.682 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:50.682 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:50.682 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:50.682 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:50.682 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:50.682 "name": "pt3", 00:17:50.682 "aliases": [ 00:17:50.682 "00000000-0000-0000-0000-000000000003" 00:17:50.682 ], 00:17:50.682 "product_name": "passthru", 00:17:50.682 "block_size": 512, 00:17:50.682 "num_blocks": 65536, 00:17:50.682 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:50.682 "assigned_rate_limits": { 00:17:50.682 "rw_ios_per_sec": 0, 00:17:50.682 "rw_mbytes_per_sec": 0, 00:17:50.682 "r_mbytes_per_sec": 0, 00:17:50.682 "w_mbytes_per_sec": 0 00:17:50.682 }, 00:17:50.682 "claimed": true, 00:17:50.682 "claim_type": "exclusive_write", 00:17:50.682 "zoned": false, 00:17:50.682 "supported_io_types": { 00:17:50.682 "read": true, 00:17:50.682 "write": true, 00:17:50.682 "unmap": true, 00:17:50.682 "flush": true, 00:17:50.682 "reset": true, 00:17:50.682 "nvme_admin": false, 00:17:50.682 "nvme_io": false, 00:17:50.682 "nvme_io_md": false, 00:17:50.682 "write_zeroes": true, 00:17:50.682 "zcopy": true, 00:17:50.682 "get_zone_info": false, 00:17:50.682 "zone_management": false, 00:17:50.682 "zone_append": false, 00:17:50.682 "compare": false, 00:17:50.682 "compare_and_write": false, 00:17:50.682 "abort": true, 00:17:50.682 "seek_hole": false, 00:17:50.682 "seek_data": false, 00:17:50.682 "copy": true, 00:17:50.682 "nvme_iov_md": false 00:17:50.682 }, 00:17:50.682 "memory_domains": [ 00:17:50.682 { 00:17:50.682 "dma_device_id": "system", 00:17:50.682 "dma_device_type": 1 00:17:50.682 }, 00:17:50.682 { 00:17:50.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.682 "dma_device_type": 2 00:17:50.682 } 00:17:50.682 ], 00:17:50.682 "driver_specific": { 00:17:50.682 "passthru": { 00:17:50.682 "name": "pt3", 00:17:50.682 "base_bdev_name": "malloc3" 
00:17:50.682 } 00:17:50.682 } 00:17:50.682 }' 00:17:50.682 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:50.940 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:50.940 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:50.940 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:50.940 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:50.940 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:50.940 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:50.940 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:50.940 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:50.940 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:51.198 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:51.198 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:51.198 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:51.198 16:34:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:17:51.456 [2024-07-24 16:34:48.069685] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.456 16:34:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=0fd51a5b-6735-44bd-88a5-c3e870e9d72f 00:17:51.456 16:34:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 0fd51a5b-6735-44bd-88a5-c3e870e9d72f ']' 00:17:51.456 16:34:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:51.456 [2024-07-24 16:34:48.293904] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.456 [2024-07-24 16:34:48.293939] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.456 [2024-07-24 16:34:48.294027] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.456 [2024-07-24 16:34:48.294098] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.456 [2024-07-24 16:34:48.294115] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041480 name raid_bdev1, state offline 00:17:51.456 16:34:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.456 16:34:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:17:51.714 16:34:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:17:51.714 16:34:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:17:51.714 16:34:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.714 16:34:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:51.972 16:34:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:51.972 16:34:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:52.230 16:34:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:52.230 16:34:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:52.488 16:34:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:52.488 16:34:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:52.746 16:34:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:17:52.746 16:34:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:52.746 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:52.746 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:52.746 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:52.746 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.746 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:52.746 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.746 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:52.746 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.746 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:52.746 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:17:52.746 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:53.005 [2024-07-24 16:34:49.649478] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:53.005 [2024-07-24 16:34:49.651814] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:53.005 [2024-07-24 16:34:49.651876] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:53.005 [2024-07-24 16:34:49.651934] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:53.005 [2024-07-24 16:34:49.651994] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:53.005 [2024-07-24 16:34:49.652023] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:53.005 [2024-07-24 16:34:49.652048] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.005 [2024-07-24 16:34:49.652062] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state configuring 00:17:53.005 request: 00:17:53.005 { 00:17:53.005 "name": "raid_bdev1", 00:17:53.005 
"raid_level": "raid0", 00:17:53.005 "base_bdevs": [ 00:17:53.005 "malloc1", 00:17:53.005 "malloc2", 00:17:53.005 "malloc3" 00:17:53.005 ], 00:17:53.005 "strip_size_kb": 64, 00:17:53.005 "superblock": false, 00:17:53.005 "method": "bdev_raid_create", 00:17:53.005 "req_id": 1 00:17:53.006 } 00:17:53.006 Got JSON-RPC error response 00:17:53.006 response: 00:17:53.006 { 00:17:53.006 "code": -17, 00:17:53.006 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:53.006 } 00:17:53.006 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:53.006 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:53.006 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:53.006 16:34:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:53.006 16:34:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.006 16:34:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:17:53.264 16:34:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:17:53.264 16:34:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:17:53.264 16:34:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:53.264 [2024-07-24 16:34:50.094644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:53.264 [2024-07-24 16:34:50.094714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.264 [2024-07-24 16:34:50.094741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000042080 00:17:53.264 [2024-07-24 16:34:50.094756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.264 [2024-07-24 16:34:50.097539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.264 [2024-07-24 16:34:50.097573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:53.264 [2024-07-24 16:34:50.097671] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:53.264 [2024-07-24 16:34:50.097753] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:53.264 pt1 00:17:53.264 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:53.264 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:53.264 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:53.264 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:53.264 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:53.264 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:53.264 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.264 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.264 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.264 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.264 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.264 16:34:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.522 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.522 "name": "raid_bdev1", 00:17:53.522 "uuid": "0fd51a5b-6735-44bd-88a5-c3e870e9d72f", 00:17:53.522 "strip_size_kb": 64, 00:17:53.522 "state": "configuring", 00:17:53.522 "raid_level": "raid0", 00:17:53.522 "superblock": true, 00:17:53.522 "num_base_bdevs": 3, 00:17:53.522 "num_base_bdevs_discovered": 1, 00:17:53.522 "num_base_bdevs_operational": 3, 00:17:53.522 "base_bdevs_list": [ 00:17:53.522 { 00:17:53.522 "name": "pt1", 00:17:53.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:53.522 "is_configured": true, 00:17:53.522 "data_offset": 2048, 00:17:53.522 "data_size": 63488 00:17:53.522 }, 00:17:53.522 { 00:17:53.522 "name": null, 00:17:53.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.523 "is_configured": false, 00:17:53.523 "data_offset": 2048, 00:17:53.523 "data_size": 63488 00:17:53.523 }, 00:17:53.523 { 00:17:53.523 "name": null, 00:17:53.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:53.523 "is_configured": false, 00:17:53.523 "data_offset": 2048, 00:17:53.523 "data_size": 63488 00:17:53.523 } 00:17:53.523 ] 00:17:53.523 }' 00:17:53.523 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.523 16:34:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.089 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:17:54.089 16:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:54.347 [2024-07-24 16:34:51.101411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:54.347 [2024-07-24 16:34:51.101477] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.347 [2024-07-24 16:34:51.101507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:17:54.347 [2024-07-24 16:34:51.101523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.347 [2024-07-24 16:34:51.102102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.347 [2024-07-24 16:34:51.102128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:54.347 [2024-07-24 16:34:51.102227] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:54.347 [2024-07-24 16:34:51.102255] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:54.347 pt2 00:17:54.347 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:54.606 [2024-07-24 16:34:51.330074] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:54.606 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:54.606 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:54.606 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:54.606 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:54.606 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:54.606 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:54.606 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.606 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:17:54.606 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.606 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.606 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.606 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.864 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.864 "name": "raid_bdev1", 00:17:54.864 "uuid": "0fd51a5b-6735-44bd-88a5-c3e870e9d72f", 00:17:54.864 "strip_size_kb": 64, 00:17:54.864 "state": "configuring", 00:17:54.864 "raid_level": "raid0", 00:17:54.864 "superblock": true, 00:17:54.864 "num_base_bdevs": 3, 00:17:54.864 "num_base_bdevs_discovered": 1, 00:17:54.864 "num_base_bdevs_operational": 3, 00:17:54.864 "base_bdevs_list": [ 00:17:54.864 { 00:17:54.864 "name": "pt1", 00:17:54.864 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.864 "is_configured": true, 00:17:54.864 "data_offset": 2048, 00:17:54.864 "data_size": 63488 00:17:54.864 }, 00:17:54.865 { 00:17:54.865 "name": null, 00:17:54.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.865 "is_configured": false, 00:17:54.865 "data_offset": 2048, 00:17:54.865 "data_size": 63488 00:17:54.865 }, 00:17:54.865 { 00:17:54.865 "name": null, 00:17:54.865 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:54.865 "is_configured": false, 00:17:54.865 "data_offset": 2048, 00:17:54.865 "data_size": 63488 00:17:54.865 } 00:17:54.865 ] 00:17:54.865 }' 00:17:54.865 16:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.865 16:34:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.432 16:34:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:17:55.432 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:55.432 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:55.690 [2024-07-24 16:34:52.352849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:55.690 [2024-07-24 16:34:52.352913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.690 [2024-07-24 16:34:52.352936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:17:55.690 [2024-07-24 16:34:52.352954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.690 [2024-07-24 16:34:52.353520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.690 [2024-07-24 16:34:52.353549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:55.690 [2024-07-24 16:34:52.353642] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:55.690 [2024-07-24 16:34:52.353672] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:55.690 pt2 00:17:55.690 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:17:55.690 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:55.690 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:55.949 [2024-07-24 16:34:52.577433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:55.949 [2024-07-24 16:34:52.577496] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.949 [2024-07-24 16:34:52.577520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:17:55.949 [2024-07-24 16:34:52.577538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.949 [2024-07-24 16:34:52.578131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.949 [2024-07-24 16:34:52.578171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:55.949 [2024-07-24 16:34:52.578264] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:55.949 [2024-07-24 16:34:52.578299] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:55.949 [2024-07-24 16:34:52.578480] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:17:55.949 [2024-07-24 16:34:52.578499] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:55.949 [2024-07-24 16:34:52.578791] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:17:55.949 [2024-07-24 16:34:52.579018] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:17:55.949 [2024-07-24 16:34:52.579032] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:17:55.949 [2024-07-24 16:34:52.579231] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.949 pt3 00:17:55.949 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:17:55.949 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:55.949 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:55.949 16:34:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:55.949 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:55.949 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:55.949 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:55.949 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:55.949 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.949 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.949 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.949 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.949 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.949 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.207 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.207 "name": "raid_bdev1", 00:17:56.207 "uuid": "0fd51a5b-6735-44bd-88a5-c3e870e9d72f", 00:17:56.207 "strip_size_kb": 64, 00:17:56.208 "state": "online", 00:17:56.208 "raid_level": "raid0", 00:17:56.208 "superblock": true, 00:17:56.208 "num_base_bdevs": 3, 00:17:56.208 "num_base_bdevs_discovered": 3, 00:17:56.208 "num_base_bdevs_operational": 3, 00:17:56.208 "base_bdevs_list": [ 00:17:56.208 { 00:17:56.208 "name": "pt1", 00:17:56.208 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:56.208 "is_configured": true, 00:17:56.208 "data_offset": 2048, 00:17:56.208 "data_size": 63488 00:17:56.208 }, 00:17:56.208 { 00:17:56.208 "name": "pt2", 
00:17:56.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.208 "is_configured": true, 00:17:56.208 "data_offset": 2048, 00:17:56.208 "data_size": 63488 00:17:56.208 }, 00:17:56.208 { 00:17:56.208 "name": "pt3", 00:17:56.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:56.208 "is_configured": true, 00:17:56.208 "data_offset": 2048, 00:17:56.208 "data_size": 63488 00:17:56.208 } 00:17:56.208 ] 00:17:56.208 }' 00:17:56.208 16:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.208 16:34:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.775 16:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:17:56.775 16:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:56.775 16:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:56.775 16:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:56.775 16:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:56.775 16:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:56.775 16:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:56.775 16:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:57.041 [2024-07-24 16:34:53.857229] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.041 16:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:57.041 "name": "raid_bdev1", 00:17:57.041 "aliases": [ 00:17:57.041 "0fd51a5b-6735-44bd-88a5-c3e870e9d72f" 00:17:57.041 ], 00:17:57.041 "product_name": "Raid Volume", 00:17:57.041 "block_size": 512, 
00:17:57.041 "num_blocks": 190464, 00:17:57.041 "uuid": "0fd51a5b-6735-44bd-88a5-c3e870e9d72f", 00:17:57.041 "assigned_rate_limits": { 00:17:57.041 "rw_ios_per_sec": 0, 00:17:57.041 "rw_mbytes_per_sec": 0, 00:17:57.041 "r_mbytes_per_sec": 0, 00:17:57.041 "w_mbytes_per_sec": 0 00:17:57.042 }, 00:17:57.042 "claimed": false, 00:17:57.042 "zoned": false, 00:17:57.042 "supported_io_types": { 00:17:57.042 "read": true, 00:17:57.042 "write": true, 00:17:57.042 "unmap": true, 00:17:57.042 "flush": true, 00:17:57.042 "reset": true, 00:17:57.042 "nvme_admin": false, 00:17:57.042 "nvme_io": false, 00:17:57.042 "nvme_io_md": false, 00:17:57.042 "write_zeroes": true, 00:17:57.042 "zcopy": false, 00:17:57.042 "get_zone_info": false, 00:17:57.042 "zone_management": false, 00:17:57.042 "zone_append": false, 00:17:57.042 "compare": false, 00:17:57.042 "compare_and_write": false, 00:17:57.042 "abort": false, 00:17:57.042 "seek_hole": false, 00:17:57.042 "seek_data": false, 00:17:57.042 "copy": false, 00:17:57.042 "nvme_iov_md": false 00:17:57.042 }, 00:17:57.042 "memory_domains": [ 00:17:57.042 { 00:17:57.042 "dma_device_id": "system", 00:17:57.042 "dma_device_type": 1 00:17:57.042 }, 00:17:57.042 { 00:17:57.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.042 "dma_device_type": 2 00:17:57.043 }, 00:17:57.043 { 00:17:57.043 "dma_device_id": "system", 00:17:57.043 "dma_device_type": 1 00:17:57.043 }, 00:17:57.043 { 00:17:57.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.043 "dma_device_type": 2 00:17:57.043 }, 00:17:57.043 { 00:17:57.043 "dma_device_id": "system", 00:17:57.043 "dma_device_type": 1 00:17:57.043 }, 00:17:57.043 { 00:17:57.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.043 "dma_device_type": 2 00:17:57.043 } 00:17:57.043 ], 00:17:57.043 "driver_specific": { 00:17:57.043 "raid": { 00:17:57.043 "uuid": "0fd51a5b-6735-44bd-88a5-c3e870e9d72f", 00:17:57.043 "strip_size_kb": 64, 00:17:57.043 "state": "online", 00:17:57.043 "raid_level": "raid0", 
00:17:57.043 "superblock": true, 00:17:57.043 "num_base_bdevs": 3, 00:17:57.043 "num_base_bdevs_discovered": 3, 00:17:57.043 "num_base_bdevs_operational": 3, 00:17:57.043 "base_bdevs_list": [ 00:17:57.043 { 00:17:57.043 "name": "pt1", 00:17:57.043 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:57.043 "is_configured": true, 00:17:57.043 "data_offset": 2048, 00:17:57.043 "data_size": 63488 00:17:57.043 }, 00:17:57.043 { 00:17:57.043 "name": "pt2", 00:17:57.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.043 "is_configured": true, 00:17:57.044 "data_offset": 2048, 00:17:57.044 "data_size": 63488 00:17:57.044 }, 00:17:57.044 { 00:17:57.044 "name": "pt3", 00:17:57.044 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:57.044 "is_configured": true, 00:17:57.044 "data_offset": 2048, 00:17:57.044 "data_size": 63488 00:17:57.044 } 00:17:57.044 ] 00:17:57.044 } 00:17:57.044 } 00:17:57.044 }' 00:17:57.044 16:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:57.310 16:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:57.310 pt2 00:17:57.310 pt3' 00:17:57.310 16:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:57.310 16:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:57.310 16:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:57.568 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:57.568 "name": "pt1", 00:17:57.568 "aliases": [ 00:17:57.568 "00000000-0000-0000-0000-000000000001" 00:17:57.568 ], 00:17:57.568 "product_name": "passthru", 00:17:57.568 "block_size": 512, 00:17:57.568 "num_blocks": 65536, 00:17:57.568 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:57.568 "assigned_rate_limits": { 00:17:57.568 "rw_ios_per_sec": 0, 00:17:57.568 "rw_mbytes_per_sec": 0, 00:17:57.568 "r_mbytes_per_sec": 0, 00:17:57.568 "w_mbytes_per_sec": 0 00:17:57.568 }, 00:17:57.568 "claimed": true, 00:17:57.568 "claim_type": "exclusive_write", 00:17:57.568 "zoned": false, 00:17:57.568 "supported_io_types": { 00:17:57.568 "read": true, 00:17:57.568 "write": true, 00:17:57.568 "unmap": true, 00:17:57.568 "flush": true, 00:17:57.568 "reset": true, 00:17:57.568 "nvme_admin": false, 00:17:57.568 "nvme_io": false, 00:17:57.568 "nvme_io_md": false, 00:17:57.568 "write_zeroes": true, 00:17:57.568 "zcopy": true, 00:17:57.568 "get_zone_info": false, 00:17:57.568 "zone_management": false, 00:17:57.568 "zone_append": false, 00:17:57.568 "compare": false, 00:17:57.568 "compare_and_write": false, 00:17:57.568 "abort": true, 00:17:57.568 "seek_hole": false, 00:17:57.568 "seek_data": false, 00:17:57.568 "copy": true, 00:17:57.568 "nvme_iov_md": false 00:17:57.568 }, 00:17:57.568 "memory_domains": [ 00:17:57.568 { 00:17:57.568 "dma_device_id": "system", 00:17:57.568 "dma_device_type": 1 00:17:57.568 }, 00:17:57.568 { 00:17:57.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.568 "dma_device_type": 2 00:17:57.568 } 00:17:57.568 ], 00:17:57.568 "driver_specific": { 00:17:57.568 "passthru": { 00:17:57.568 "name": "pt1", 00:17:57.568 "base_bdev_name": "malloc1" 00:17:57.568 } 00:17:57.568 } 00:17:57.568 }' 00:17:57.568 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.856 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.856 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:57.856 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.856 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.856 16:34:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:57.856 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.856 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:58.114 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:58.114 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:58.115 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:58.115 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:58.115 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:58.115 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:58.115 16:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:58.373 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:58.373 "name": "pt2", 00:17:58.373 "aliases": [ 00:17:58.373 "00000000-0000-0000-0000-000000000002" 00:17:58.373 ], 00:17:58.373 "product_name": "passthru", 00:17:58.373 "block_size": 512, 00:17:58.373 "num_blocks": 65536, 00:17:58.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.373 "assigned_rate_limits": { 00:17:58.373 "rw_ios_per_sec": 0, 00:17:58.374 "rw_mbytes_per_sec": 0, 00:17:58.374 "r_mbytes_per_sec": 0, 00:17:58.374 "w_mbytes_per_sec": 0 00:17:58.374 }, 00:17:58.374 "claimed": true, 00:17:58.374 "claim_type": "exclusive_write", 00:17:58.374 "zoned": false, 00:17:58.374 "supported_io_types": { 00:17:58.374 "read": true, 00:17:58.374 "write": true, 00:17:58.374 "unmap": true, 00:17:58.374 "flush": true, 00:17:58.374 "reset": true, 00:17:58.374 "nvme_admin": false, 00:17:58.374 
"nvme_io": false, 00:17:58.374 "nvme_io_md": false, 00:17:58.374 "write_zeroes": true, 00:17:58.374 "zcopy": true, 00:17:58.374 "get_zone_info": false, 00:17:58.374 "zone_management": false, 00:17:58.374 "zone_append": false, 00:17:58.374 "compare": false, 00:17:58.374 "compare_and_write": false, 00:17:58.374 "abort": true, 00:17:58.374 "seek_hole": false, 00:17:58.374 "seek_data": false, 00:17:58.374 "copy": true, 00:17:58.374 "nvme_iov_md": false 00:17:58.374 }, 00:17:58.374 "memory_domains": [ 00:17:58.374 { 00:17:58.374 "dma_device_id": "system", 00:17:58.374 "dma_device_type": 1 00:17:58.374 }, 00:17:58.374 { 00:17:58.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.374 "dma_device_type": 2 00:17:58.374 } 00:17:58.374 ], 00:17:58.374 "driver_specific": { 00:17:58.374 "passthru": { 00:17:58.374 "name": "pt2", 00:17:58.374 "base_bdev_name": "malloc2" 00:17:58.374 } 00:17:58.374 } 00:17:58.374 }' 00:17:58.374 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.374 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.374 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:58.374 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:58.374 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:58.633 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:58.633 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:58.633 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:58.633 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:58.633 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:58.633 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:17:58.633 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:58.633 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:58.633 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:58.633 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:59.201 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:59.201 "name": "pt3", 00:17:59.201 "aliases": [ 00:17:59.201 "00000000-0000-0000-0000-000000000003" 00:17:59.201 ], 00:17:59.201 "product_name": "passthru", 00:17:59.201 "block_size": 512, 00:17:59.201 "num_blocks": 65536, 00:17:59.201 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:59.201 "assigned_rate_limits": { 00:17:59.201 "rw_ios_per_sec": 0, 00:17:59.201 "rw_mbytes_per_sec": 0, 00:17:59.201 "r_mbytes_per_sec": 0, 00:17:59.201 "w_mbytes_per_sec": 0 00:17:59.201 }, 00:17:59.201 "claimed": true, 00:17:59.201 "claim_type": "exclusive_write", 00:17:59.201 "zoned": false, 00:17:59.201 "supported_io_types": { 00:17:59.201 "read": true, 00:17:59.201 "write": true, 00:17:59.201 "unmap": true, 00:17:59.201 "flush": true, 00:17:59.201 "reset": true, 00:17:59.201 "nvme_admin": false, 00:17:59.201 "nvme_io": false, 00:17:59.201 "nvme_io_md": false, 00:17:59.201 "write_zeroes": true, 00:17:59.201 "zcopy": true, 00:17:59.201 "get_zone_info": false, 00:17:59.201 "zone_management": false, 00:17:59.201 "zone_append": false, 00:17:59.201 "compare": false, 00:17:59.201 "compare_and_write": false, 00:17:59.201 "abort": true, 00:17:59.201 "seek_hole": false, 00:17:59.201 "seek_data": false, 00:17:59.201 "copy": true, 00:17:59.201 "nvme_iov_md": false 00:17:59.201 }, 00:17:59.201 "memory_domains": [ 00:17:59.201 { 00:17:59.202 "dma_device_id": "system", 00:17:59.202 
"dma_device_type": 1 00:17:59.202 }, 00:17:59.202 { 00:17:59.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.202 "dma_device_type": 2 00:17:59.202 } 00:17:59.202 ], 00:17:59.202 "driver_specific": { 00:17:59.202 "passthru": { 00:17:59.202 "name": "pt3", 00:17:59.202 "base_bdev_name": "malloc3" 00:17:59.202 } 00:17:59.202 } 00:17:59.202 }' 00:17:59.202 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.202 16:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:59.202 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:59.202 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.461 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:59.461 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:59.461 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.461 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:59.461 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:59.461 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.461 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:59.461 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:59.461 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:17:59.461 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:00.092 [2024-07-24 16:34:56.741245] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.092 16:34:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 0fd51a5b-6735-44bd-88a5-c3e870e9d72f '!=' 0fd51a5b-6735-44bd-88a5-c3e870e9d72f ']' 00:18:00.092 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:18:00.092 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:00.092 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:00.092 16:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 1636505 00:18:00.092 16:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 1636505 ']' 00:18:00.092 16:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 1636505 00:18:00.092 16:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:18:00.092 16:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.092 16:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1636505 00:18:00.092 16:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:00.092 16:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:00.092 16:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1636505' 00:18:00.092 killing process with pid 1636505 00:18:00.092 16:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 1636505 00:18:00.092 [2024-07-24 16:34:56.824980] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:00.092 [2024-07-24 16:34:56.825089] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.092 16:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 1636505 00:18:00.092 [2024-07-24 16:34:56.825171] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:00.092 [2024-07-24 16:34:56.825191] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:18:00.352 [2024-07-24 16:34:57.157067] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:02.258 16:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:18:02.258 00:18:02.258 real 0m16.416s 00:18:02.258 user 0m27.906s 00:18:02.258 sys 0m2.630s 00:18:02.258 16:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:02.258 16:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.258 ************************************ 00:18:02.258 END TEST raid_superblock_test 00:18:02.258 ************************************ 00:18:02.258 16:34:58 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:18:02.258 16:34:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:02.258 16:34:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:02.258 16:34:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:02.258 ************************************ 00:18:02.258 START TEST raid_read_error_test 00:18:02.258 ************************************ 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # 
create_arg+=' -z 64' 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.m00vusrrYf 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1639524 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1639524 /var/tmp/spdk-raid.sock 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 1639524 ']' 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:02.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.258 16:34:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.258 [2024-07-24 16:34:59.000664] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:18:02.258 [2024-07-24 16:34:59.000788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1639524 ] 00:18:02.518 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.518 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:02.3 cannot be used 
00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:02.519 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:02.519 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:02.519 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:02.519 [2024-07-24 16:34:59.228136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.778 [2024-07-24 16:34:59.507352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.038 [2024-07-24 16:34:59.860163] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.038 [2024-07-24 16:34:59.860201] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:03.297 16:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:03.297 16:35:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:03.297 16:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:03.297 16:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:03.556 BaseBdev1_malloc 00:18:03.556 16:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:03.815 true 00:18:03.815 16:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:03.815 [2024-07-24 16:35:00.630179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:03.815 [2024-07-24 16:35:00.630238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.815 [2024-07-24 16:35:00.630265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:18:03.815 [2024-07-24 16:35:00.630287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.815 [2024-07-24 16:35:00.633030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.815 [2024-07-24 16:35:00.633068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:03.815 BaseBdev1 00:18:03.815 16:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:03.815 16:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:04.073 BaseBdev2_malloc 00:18:04.073 16:35:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:04.332 true 00:18:04.332 16:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:04.591 [2024-07-24 16:35:01.265036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:18:04.591 [2024-07-24 16:35:01.265098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.591 [2024-07-24 16:35:01.265125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:18:04.591 [2024-07-24 16:35:01.265154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.591 [2024-07-24 16:35:01.267907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.591 [2024-07-24 16:35:01.267948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:04.591 BaseBdev2 00:18:04.591 16:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:04.591 16:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:04.850 BaseBdev3_malloc 00:18:04.850 16:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:04.850 true 00:18:04.850 16:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:05.109 [2024-07-24 16:35:01.843446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:05.109 [2024-07-24 16:35:01.843507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.109 [2024-07-24 16:35:01.843535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:18:05.109 [2024-07-24 16:35:01.843554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.109 [2024-07-24 
16:35:01.846338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.109 [2024-07-24 16:35:01.846376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:05.109 BaseBdev3 00:18:05.109 16:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:05.368 [2024-07-24 16:35:02.060056] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.368 [2024-07-24 16:35:02.062362] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:05.368 [2024-07-24 16:35:02.062452] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:05.368 [2024-07-24 16:35:02.062719] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:18:05.368 [2024-07-24 16:35:02.062736] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:05.368 [2024-07-24 16:35:02.063056] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:18:05.368 [2024-07-24 16:35:02.063323] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:18:05.368 [2024-07-24 16:35:02.063349] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:18:05.368 [2024-07-24 16:35:02.063543] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.368 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:05.368 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:05.368 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:18:05.368 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:05.368 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:05.368 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:05.368 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.368 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.368 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.368 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.368 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.368 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.627 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:05.627 "name": "raid_bdev1", 00:18:05.627 "uuid": "b6caa191-46fc-4da6-a1d9-2399d43ce4e9", 00:18:05.627 "strip_size_kb": 64, 00:18:05.627 "state": "online", 00:18:05.627 "raid_level": "raid0", 00:18:05.627 "superblock": true, 00:18:05.627 "num_base_bdevs": 3, 00:18:05.627 "num_base_bdevs_discovered": 3, 00:18:05.627 "num_base_bdevs_operational": 3, 00:18:05.627 "base_bdevs_list": [ 00:18:05.627 { 00:18:05.627 "name": "BaseBdev1", 00:18:05.627 "uuid": "cec6f100-44b9-55a6-af4e-590626ace66b", 00:18:05.627 "is_configured": true, 00:18:05.627 "data_offset": 2048, 00:18:05.627 "data_size": 63488 00:18:05.627 }, 00:18:05.627 { 00:18:05.627 "name": "BaseBdev2", 00:18:05.627 "uuid": "c578c61b-1af4-5dc7-93d5-338d82dfeeac", 00:18:05.627 "is_configured": true, 00:18:05.627 "data_offset": 2048, 
00:18:05.627 "data_size": 63488 00:18:05.627 }, 00:18:05.627 { 00:18:05.627 "name": "BaseBdev3", 00:18:05.627 "uuid": "72afc0a9-4d6d-570f-a96b-50f8efa6a502", 00:18:05.627 "is_configured": true, 00:18:05.627 "data_offset": 2048, 00:18:05.627 "data_size": 63488 00:18:05.627 } 00:18:05.627 ] 00:18:05.627 }' 00:18:05.627 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:05.627 16:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:06.195 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:18:06.196 16:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:06.196 [2024-07-24 16:35:02.912101] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:18:07.131 16:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:07.699 16:35:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.699 "name": "raid_bdev1", 00:18:07.699 "uuid": "b6caa191-46fc-4da6-a1d9-2399d43ce4e9", 00:18:07.699 "strip_size_kb": 64, 00:18:07.699 "state": "online", 00:18:07.699 "raid_level": "raid0", 00:18:07.699 "superblock": true, 00:18:07.699 "num_base_bdevs": 3, 00:18:07.699 "num_base_bdevs_discovered": 3, 00:18:07.699 "num_base_bdevs_operational": 3, 00:18:07.699 "base_bdevs_list": [ 00:18:07.699 { 00:18:07.699 "name": "BaseBdev1", 00:18:07.699 "uuid": "cec6f100-44b9-55a6-af4e-590626ace66b", 00:18:07.699 "is_configured": true, 00:18:07.699 "data_offset": 2048, 00:18:07.699 "data_size": 63488 00:18:07.699 }, 00:18:07.699 { 00:18:07.699 "name": "BaseBdev2", 00:18:07.699 "uuid": "c578c61b-1af4-5dc7-93d5-338d82dfeeac", 00:18:07.699 "is_configured": true, 00:18:07.699 "data_offset": 2048, 00:18:07.699 "data_size": 63488 00:18:07.699 }, 00:18:07.699 { 00:18:07.699 "name": "BaseBdev3", 00:18:07.699 "uuid": "72afc0a9-4d6d-570f-a96b-50f8efa6a502", 
00:18:07.699 "is_configured": true, 00:18:07.699 "data_offset": 2048, 00:18:07.699 "data_size": 63488 00:18:07.699 } 00:18:07.699 ] 00:18:07.699 }' 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.699 16:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.636 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:08.895 [2024-07-24 16:35:05.615522] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:08.895 [2024-07-24 16:35:05.615567] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.895 [2024-07-24 16:35:05.618840] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.895 [2024-07-24 16:35:05.618888] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.895 [2024-07-24 16:35:05.618935] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.895 [2024-07-24 16:35:05.618953] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:18:08.895 0 00:18:08.895 16:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1639524 00:18:08.895 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 1639524 ']' 00:18:08.895 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 1639524 00:18:08.895 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:18:08.895 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:08.895 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1639524 
00:18:08.895 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:08.895 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:08.895 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1639524' 00:18:08.895 killing process with pid 1639524 00:18:08.895 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 1639524 00:18:08.895 [2024-07-24 16:35:05.695928] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:08.895 16:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 1639524 00:18:09.154 [2024-07-24 16:35:05.923467] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:11.061 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:18:11.061 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.m00vusrrYf 00:18:11.061 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:18:11.061 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.37 00:18:11.061 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:18:11.061 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:11.061 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:11.061 16:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.37 != \0\.\0\0 ]] 00:18:11.061 00:18:11.061 real 0m8.866s 00:18:11.061 user 0m12.596s 00:18:11.061 sys 0m1.325s 00:18:11.061 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:11.061 16:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.061 ************************************ 00:18:11.061 
END TEST raid_read_error_test 00:18:11.061 ************************************ 00:18:11.061 16:35:07 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:18:11.061 16:35:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:11.061 16:35:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:11.061 16:35:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:11.061 ************************************ 00:18:11.061 START TEST raid_write_error_test 00:18:11.061 ************************************ 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:18:11.061 16:35:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.f5bBtO3TlU 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1641198 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1641198 /var/tmp/spdk-raid.sock 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:11.061 16:35:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 1641198 ']' 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:11.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.061 16:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.321 [2024-07-24 16:35:07.961830] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:18:11.321 [2024-07-24 16:35:07.961953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641198 ] 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:11.321 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:18:11.321 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:11.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.321 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:11.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.322 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:11.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.322 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:11.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.322 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:11.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.322 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:11.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.322 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:11.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.322 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:11.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.322 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:11.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:11.322 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:11.581 [2024-07-24 16:35:08.185892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.840 [2024-07-24 16:35:08.465862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.099 
[2024-07-24 16:35:08.814224] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.100 [2024-07-24 16:35:08.814263] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:12.359 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.359 16:35:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:12.359 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:12.359 16:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:12.618 BaseBdev1_malloc 00:18:12.618 16:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:12.877 true 00:18:12.877 16:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:12.877 [2024-07-24 16:35:09.717334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:12.877 [2024-07-24 16:35:09.717395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.877 [2024-07-24 16:35:09.717421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:18:12.877 [2024-07-24 16:35:09.717443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.877 [2024-07-24 16:35:09.720180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.877 [2024-07-24 16:35:09.720220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:18:12.877 BaseBdev1 00:18:12.877 16:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:12.877 16:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:13.446 BaseBdev2_malloc 00:18:13.446 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:13.446 true 00:18:13.446 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:13.705 [2024-07-24 16:35:10.448854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:13.705 [2024-07-24 16:35:10.448920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.705 [2024-07-24 16:35:10.448949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:18:13.705 [2024-07-24 16:35:10.448971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.705 [2024-07-24 16:35:10.451774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.705 [2024-07-24 16:35:10.451814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:13.705 BaseBdev2 00:18:13.705 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:13.705 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:13.964 BaseBdev3_malloc 00:18:13.964 16:35:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:14.223 true 00:18:14.223 16:35:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:14.497 [2024-07-24 16:35:11.187268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:14.497 [2024-07-24 16:35:11.187331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.497 [2024-07-24 16:35:11.187360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:18:14.497 [2024-07-24 16:35:11.187378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.497 [2024-07-24 16:35:11.190187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.497 [2024-07-24 16:35:11.190223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:14.497 BaseBdev3 00:18:14.497 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:14.802 [2024-07-24 16:35:11.427971] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:14.802 [2024-07-24 16:35:11.430373] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:14.802 [2024-07-24 16:35:11.430466] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:14.802 [2024-07-24 16:35:11.430760] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:18:14.802 [2024-07-24 
16:35:11.430778] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:14.802 [2024-07-24 16:35:11.431133] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:18:14.802 [2024-07-24 16:35:11.431404] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:18:14.802 [2024-07-24 16:35:11.431430] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:18:14.802 [2024-07-24 16:35:11.431663] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.802 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:14.802 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:14.802 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:14.802 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:14.802 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:14.802 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:14.802 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:14.802 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:14.802 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:14.802 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:14.802 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.802 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.062 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.062 "name": "raid_bdev1", 00:18:15.062 "uuid": "04c830f0-edcb-4caa-94fb-2bbeff79843f", 00:18:15.062 "strip_size_kb": 64, 00:18:15.062 "state": "online", 00:18:15.062 "raid_level": "raid0", 00:18:15.062 "superblock": true, 00:18:15.062 "num_base_bdevs": 3, 00:18:15.062 "num_base_bdevs_discovered": 3, 00:18:15.062 "num_base_bdevs_operational": 3, 00:18:15.062 "base_bdevs_list": [ 00:18:15.062 { 00:18:15.062 "name": "BaseBdev1", 00:18:15.062 "uuid": "2f20fad8-93e4-5f2a-8c11-acabaef2dbe8", 00:18:15.062 "is_configured": true, 00:18:15.062 "data_offset": 2048, 00:18:15.062 "data_size": 63488 00:18:15.062 }, 00:18:15.062 { 00:18:15.062 "name": "BaseBdev2", 00:18:15.062 "uuid": "57e9768a-d11e-5415-96b6-035d551e3d1a", 00:18:15.062 "is_configured": true, 00:18:15.062 "data_offset": 2048, 00:18:15.062 "data_size": 63488 00:18:15.062 }, 00:18:15.062 { 00:18:15.062 "name": "BaseBdev3", 00:18:15.062 "uuid": "9336db8d-4364-5177-86b4-63d3a29e57ab", 00:18:15.062 "is_configured": true, 00:18:15.062 "data_offset": 2048, 00:18:15.062 "data_size": 63488 00:18:15.062 } 00:18:15.062 ] 00:18:15.062 }' 00:18:15.062 16:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.062 16:35:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.631 16:35:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:18:15.631 16:35:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:15.631 [2024-07-24 16:35:12.320569] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.569 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.828 16:35:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.828 "name": "raid_bdev1", 00:18:16.828 "uuid": "04c830f0-edcb-4caa-94fb-2bbeff79843f", 00:18:16.828 "strip_size_kb": 64, 00:18:16.828 "state": "online", 00:18:16.828 "raid_level": "raid0", 00:18:16.828 "superblock": true, 00:18:16.828 "num_base_bdevs": 3, 00:18:16.828 "num_base_bdevs_discovered": 3, 00:18:16.828 "num_base_bdevs_operational": 3, 00:18:16.828 "base_bdevs_list": [ 00:18:16.828 { 00:18:16.828 "name": "BaseBdev1", 00:18:16.828 "uuid": "2f20fad8-93e4-5f2a-8c11-acabaef2dbe8", 00:18:16.828 "is_configured": true, 00:18:16.828 "data_offset": 2048, 00:18:16.828 "data_size": 63488 00:18:16.828 }, 00:18:16.828 { 00:18:16.828 "name": "BaseBdev2", 00:18:16.828 "uuid": "57e9768a-d11e-5415-96b6-035d551e3d1a", 00:18:16.828 "is_configured": true, 00:18:16.828 "data_offset": 2048, 00:18:16.828 "data_size": 63488 00:18:16.828 }, 00:18:16.828 { 00:18:16.828 "name": "BaseBdev3", 00:18:16.828 "uuid": "9336db8d-4364-5177-86b4-63d3a29e57ab", 00:18:16.828 "is_configured": true, 00:18:16.828 "data_offset": 2048, 00:18:16.828 "data_size": 63488 00:18:16.828 } 00:18:16.828 ] 00:18:16.828 }' 00:18:16.828 16:35:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.828 16:35:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.397 16:35:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:17.656 [2024-07-24 16:35:14.310314] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.656 [2024-07-24 16:35:14.310358] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.656 [2024-07-24 16:35:14.313683] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.656 [2024-07-24 16:35:14.313734] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.656 [2024-07-24 16:35:14.313783] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.656 [2024-07-24 16:35:14.313799] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:18:17.656 0 00:18:17.656 16:35:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1641198 00:18:17.656 16:35:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 1641198 ']' 00:18:17.656 16:35:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 1641198 00:18:17.656 16:35:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:18:17.656 16:35:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:17.656 16:35:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1641198 00:18:17.656 16:35:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:17.656 16:35:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:17.656 16:35:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1641198' 00:18:17.656 killing process with pid 1641198 00:18:17.656 16:35:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 1641198 00:18:17.656 [2024-07-24 16:35:14.384411] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.656 16:35:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 1641198 00:18:17.916 [2024-07-24 16:35:14.626356] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:19.823 16:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.f5bBtO3TlU 
00:18:19.823 16:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:18:19.823 16:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:18:19.823 16:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.50 00:18:19.823 16:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:18:19.823 16:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:19.823 16:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:19.823 16:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.50 != \0\.\0\0 ]] 00:18:19.823 00:18:19.823 real 0m8.565s 00:18:19.823 user 0m12.017s 00:18:19.823 sys 0m1.314s 00:18:19.823 16:35:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:19.823 16:35:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.823 ************************************ 00:18:19.823 END TEST raid_write_error_test 00:18:19.823 ************************************ 00:18:19.823 16:35:16 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:18:19.823 16:35:16 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:18:19.823 16:35:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:19.823 16:35:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:19.823 16:35:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.823 ************************************ 00:18:19.823 START TEST raid_state_function_test 00:18:19.823 ************************************ 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # 
local raid_level=concat 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:19.823 16:35:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1642623 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1642623' 00:18:19.823 Process raid pid: 1642623 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1642623 /var/tmp/spdk-raid.sock 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 1642623 ']' 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:18:19.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.823 16:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.823 [2024-07-24 16:35:16.586035] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:18:19.823 [2024-07-24 16:35:16.586121] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.082 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:20.083 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:20.083 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:20.083 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.083 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:20.083 [2024-07-24 16:35:16.788963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.342 [2024-07-24 16:35:17.084849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.602 [2024-07-24 16:35:17.444483] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.602 [2024-07-24 16:35:17.444524] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.861 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.861 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:18:20.861 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:21.120 [2024-07-24 16:35:17.775359] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:21.120 [2024-07-24 16:35:17.775415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:21.120 [2024-07-24 16:35:17.775429] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:21.120 [2024-07-24 16:35:17.775446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:21.120 [2024-07-24 16:35:17.775457] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:21.120 [2024-07-24 16:35:17.775474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:21.120 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:21.120 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:21.120 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:21.120 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:21.120 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:21.120 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:21.120 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.120 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.120 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:18:21.120 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:21.120 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.120 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.120 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:21.120 "name": "Existed_Raid", 00:18:21.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.120 "strip_size_kb": 64, 00:18:21.120 "state": "configuring", 00:18:21.120 "raid_level": "concat", 00:18:21.120 "superblock": false, 00:18:21.120 "num_base_bdevs": 3, 00:18:21.120 "num_base_bdevs_discovered": 0, 00:18:21.120 "num_base_bdevs_operational": 3, 00:18:21.120 "base_bdevs_list": [ 00:18:21.120 { 00:18:21.120 "name": "BaseBdev1", 00:18:21.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.120 "is_configured": false, 00:18:21.120 "data_offset": 0, 00:18:21.120 "data_size": 0 00:18:21.120 }, 00:18:21.120 { 00:18:21.120 "name": "BaseBdev2", 00:18:21.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.120 "is_configured": false, 00:18:21.120 "data_offset": 0, 00:18:21.120 "data_size": 0 00:18:21.120 }, 00:18:21.121 { 00:18:21.121 "name": "BaseBdev3", 00:18:21.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.121 "is_configured": false, 00:18:21.121 "data_offset": 0, 00:18:21.121 "data_size": 0 00:18:21.121 } 00:18:21.121 ] 00:18:21.121 }' 00:18:21.121 16:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:21.121 16:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.689 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:21.948 [2024-07-24 16:35:18.729803] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:21.948 [2024-07-24 16:35:18.729846] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:18:21.948 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:22.207 [2024-07-24 16:35:18.954477] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:22.207 [2024-07-24 16:35:18.954529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:22.207 [2024-07-24 16:35:18.954544] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:22.207 [2024-07-24 16:35:18.954564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:22.207 [2024-07-24 16:35:18.954575] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:22.207 [2024-07-24 16:35:18.954591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:22.207 16:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:22.465 [2024-07-24 16:35:19.240734] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:22.465 BaseBdev1 00:18:22.465 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:22.465 16:35:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:22.465 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:22.465 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:22.465 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:22.465 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:22.465 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:22.723 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:22.982 [ 00:18:22.982 { 00:18:22.982 "name": "BaseBdev1", 00:18:22.982 "aliases": [ 00:18:22.982 "f1795d8f-2084-45a6-bd9b-a26794e9adfa" 00:18:22.982 ], 00:18:22.982 "product_name": "Malloc disk", 00:18:22.982 "block_size": 512, 00:18:22.982 "num_blocks": 65536, 00:18:22.982 "uuid": "f1795d8f-2084-45a6-bd9b-a26794e9adfa", 00:18:22.982 "assigned_rate_limits": { 00:18:22.982 "rw_ios_per_sec": 0, 00:18:22.982 "rw_mbytes_per_sec": 0, 00:18:22.982 "r_mbytes_per_sec": 0, 00:18:22.982 "w_mbytes_per_sec": 0 00:18:22.982 }, 00:18:22.982 "claimed": true, 00:18:22.982 "claim_type": "exclusive_write", 00:18:22.982 "zoned": false, 00:18:22.982 "supported_io_types": { 00:18:22.982 "read": true, 00:18:22.982 "write": true, 00:18:22.982 "unmap": true, 00:18:22.982 "flush": true, 00:18:22.982 "reset": true, 00:18:22.982 "nvme_admin": false, 00:18:22.982 "nvme_io": false, 00:18:22.982 "nvme_io_md": false, 00:18:22.982 "write_zeroes": true, 00:18:22.982 "zcopy": true, 00:18:22.982 "get_zone_info": false, 00:18:22.982 "zone_management": false, 00:18:22.982 "zone_append": 
false, 00:18:22.982 "compare": false, 00:18:22.982 "compare_and_write": false, 00:18:22.982 "abort": true, 00:18:22.982 "seek_hole": false, 00:18:22.982 "seek_data": false, 00:18:22.982 "copy": true, 00:18:22.982 "nvme_iov_md": false 00:18:22.982 }, 00:18:22.982 "memory_domains": [ 00:18:22.982 { 00:18:22.982 "dma_device_id": "system", 00:18:22.982 "dma_device_type": 1 00:18:22.982 }, 00:18:22.982 { 00:18:22.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.982 "dma_device_type": 2 00:18:22.982 } 00:18:22.982 ], 00:18:22.982 "driver_specific": {} 00:18:22.982 } 00:18:22.982 ] 00:18:22.982 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:22.982 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:22.982 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:22.982 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:22.982 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:22.982 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:22.982 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:22.982 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.982 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.982 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.982 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.982 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.982 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.240 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.240 "name": "Existed_Raid", 00:18:23.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.240 "strip_size_kb": 64, 00:18:23.240 "state": "configuring", 00:18:23.240 "raid_level": "concat", 00:18:23.240 "superblock": false, 00:18:23.240 "num_base_bdevs": 3, 00:18:23.240 "num_base_bdevs_discovered": 1, 00:18:23.240 "num_base_bdevs_operational": 3, 00:18:23.240 "base_bdevs_list": [ 00:18:23.240 { 00:18:23.240 "name": "BaseBdev1", 00:18:23.240 "uuid": "f1795d8f-2084-45a6-bd9b-a26794e9adfa", 00:18:23.240 "is_configured": true, 00:18:23.240 "data_offset": 0, 00:18:23.240 "data_size": 65536 00:18:23.240 }, 00:18:23.240 { 00:18:23.240 "name": "BaseBdev2", 00:18:23.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.240 "is_configured": false, 00:18:23.240 "data_offset": 0, 00:18:23.240 "data_size": 0 00:18:23.240 }, 00:18:23.240 { 00:18:23.240 "name": "BaseBdev3", 00:18:23.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.240 "is_configured": false, 00:18:23.240 "data_offset": 0, 00:18:23.240 "data_size": 0 00:18:23.240 } 00:18:23.240 ] 00:18:23.241 }' 00:18:23.241 16:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.241 16:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.806 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:24.064 [2024-07-24 16:35:20.716739] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:24.064 
[2024-07-24 16:35:20.716800] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:18:24.064 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:24.323 [2024-07-24 16:35:20.949482] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.323 [2024-07-24 16:35:20.951815] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:24.323 [2024-07-24 16:35:20.951862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:24.323 [2024-07-24 16:35:20.951877] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:24.323 [2024-07-24 16:35:20.951893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:24.323 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:24.323 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:24.323 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:24.323 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:24.323 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:24.323 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:24.323 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:24.323 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:18:24.323 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:24.323 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:24.323 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:24.323 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:24.323 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.323 16:35:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.582 16:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.582 "name": "Existed_Raid", 00:18:24.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.582 "strip_size_kb": 64, 00:18:24.582 "state": "configuring", 00:18:24.582 "raid_level": "concat", 00:18:24.582 "superblock": false, 00:18:24.582 "num_base_bdevs": 3, 00:18:24.582 "num_base_bdevs_discovered": 1, 00:18:24.582 "num_base_bdevs_operational": 3, 00:18:24.582 "base_bdevs_list": [ 00:18:24.582 { 00:18:24.582 "name": "BaseBdev1", 00:18:24.582 "uuid": "f1795d8f-2084-45a6-bd9b-a26794e9adfa", 00:18:24.582 "is_configured": true, 00:18:24.582 "data_offset": 0, 00:18:24.582 "data_size": 65536 00:18:24.582 }, 00:18:24.582 { 00:18:24.582 "name": "BaseBdev2", 00:18:24.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.582 "is_configured": false, 00:18:24.582 "data_offset": 0, 00:18:24.582 "data_size": 0 00:18:24.582 }, 00:18:24.582 { 00:18:24.582 "name": "BaseBdev3", 00:18:24.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.582 "is_configured": false, 00:18:24.582 "data_offset": 0, 00:18:24.582 "data_size": 0 00:18:24.582 } 00:18:24.582 ] 00:18:24.582 }' 00:18:24.582 16:35:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:24.582 16:35:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.148 16:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:25.407 [2024-07-24 16:35:22.042304] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.407 BaseBdev2 00:18:25.407 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:25.407 16:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:25.407 16:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:25.407 16:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:25.407 16:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:25.407 16:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:25.407 16:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:25.407 16:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:25.666 [ 00:18:25.666 { 00:18:25.666 "name": "BaseBdev2", 00:18:25.666 "aliases": [ 00:18:25.666 "667490f6-c6b8-4d40-a021-adce80a06ab0" 00:18:25.666 ], 00:18:25.666 "product_name": "Malloc disk", 00:18:25.666 "block_size": 512, 00:18:25.666 "num_blocks": 65536, 00:18:25.666 "uuid": "667490f6-c6b8-4d40-a021-adce80a06ab0", 00:18:25.666 
"assigned_rate_limits": { 00:18:25.666 "rw_ios_per_sec": 0, 00:18:25.666 "rw_mbytes_per_sec": 0, 00:18:25.666 "r_mbytes_per_sec": 0, 00:18:25.666 "w_mbytes_per_sec": 0 00:18:25.666 }, 00:18:25.666 "claimed": true, 00:18:25.666 "claim_type": "exclusive_write", 00:18:25.666 "zoned": false, 00:18:25.666 "supported_io_types": { 00:18:25.666 "read": true, 00:18:25.666 "write": true, 00:18:25.666 "unmap": true, 00:18:25.666 "flush": true, 00:18:25.666 "reset": true, 00:18:25.666 "nvme_admin": false, 00:18:25.666 "nvme_io": false, 00:18:25.666 "nvme_io_md": false, 00:18:25.666 "write_zeroes": true, 00:18:25.666 "zcopy": true, 00:18:25.666 "get_zone_info": false, 00:18:25.666 "zone_management": false, 00:18:25.666 "zone_append": false, 00:18:25.666 "compare": false, 00:18:25.666 "compare_and_write": false, 00:18:25.666 "abort": true, 00:18:25.666 "seek_hole": false, 00:18:25.666 "seek_data": false, 00:18:25.666 "copy": true, 00:18:25.666 "nvme_iov_md": false 00:18:25.666 }, 00:18:25.666 "memory_domains": [ 00:18:25.666 { 00:18:25.666 "dma_device_id": "system", 00:18:25.666 "dma_device_type": 1 00:18:25.666 }, 00:18:25.666 { 00:18:25.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.666 "dma_device_type": 2 00:18:25.666 } 00:18:25.666 ], 00:18:25.666 "driver_specific": {} 00:18:25.666 } 00:18:25.666 ] 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.666 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.925 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.925 "name": "Existed_Raid", 00:18:25.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.925 "strip_size_kb": 64, 00:18:25.925 "state": "configuring", 00:18:25.925 "raid_level": "concat", 00:18:25.925 "superblock": false, 00:18:25.925 "num_base_bdevs": 3, 00:18:25.925 "num_base_bdevs_discovered": 2, 00:18:25.925 "num_base_bdevs_operational": 3, 00:18:25.925 "base_bdevs_list": [ 00:18:25.925 { 00:18:25.925 "name": "BaseBdev1", 00:18:25.925 "uuid": "f1795d8f-2084-45a6-bd9b-a26794e9adfa", 00:18:25.925 "is_configured": true, 00:18:25.925 "data_offset": 0, 00:18:25.925 "data_size": 65536 00:18:25.925 }, 00:18:25.925 { 00:18:25.925 "name": "BaseBdev2", 00:18:25.925 "uuid": "667490f6-c6b8-4d40-a021-adce80a06ab0", 00:18:25.925 
"is_configured": true, 00:18:25.925 "data_offset": 0, 00:18:25.925 "data_size": 65536 00:18:25.925 }, 00:18:25.925 { 00:18:25.925 "name": "BaseBdev3", 00:18:25.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.925 "is_configured": false, 00:18:25.925 "data_offset": 0, 00:18:25.925 "data_size": 0 00:18:25.925 } 00:18:25.925 ] 00:18:25.925 }' 00:18:25.925 16:35:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.925 16:35:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.491 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:26.491 [2024-07-24 16:35:23.300560] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:26.491 [2024-07-24 16:35:23.300609] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:18:26.491 [2024-07-24 16:35:23.300627] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:26.491 [2024-07-24 16:35:23.300964] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:18:26.491 [2024-07-24 16:35:23.301222] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:18:26.491 [2024-07-24 16:35:23.301239] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:18:26.491 [2024-07-24 16:35:23.301557] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.491 BaseBdev3 00:18:26.491 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:26.491 16:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:26.491 16:35:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:26.491 16:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:26.491 16:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:26.491 16:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:26.491 16:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:26.749 16:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:27.006 [ 00:18:27.006 { 00:18:27.006 "name": "BaseBdev3", 00:18:27.006 "aliases": [ 00:18:27.006 "9a5b83b7-4f6b-4541-ab4f-5ec0cd55d9b1" 00:18:27.006 ], 00:18:27.006 "product_name": "Malloc disk", 00:18:27.006 "block_size": 512, 00:18:27.006 "num_blocks": 65536, 00:18:27.006 "uuid": "9a5b83b7-4f6b-4541-ab4f-5ec0cd55d9b1", 00:18:27.006 "assigned_rate_limits": { 00:18:27.006 "rw_ios_per_sec": 0, 00:18:27.006 "rw_mbytes_per_sec": 0, 00:18:27.006 "r_mbytes_per_sec": 0, 00:18:27.006 "w_mbytes_per_sec": 0 00:18:27.006 }, 00:18:27.006 "claimed": true, 00:18:27.006 "claim_type": "exclusive_write", 00:18:27.006 "zoned": false, 00:18:27.007 "supported_io_types": { 00:18:27.007 "read": true, 00:18:27.007 "write": true, 00:18:27.007 "unmap": true, 00:18:27.007 "flush": true, 00:18:27.007 "reset": true, 00:18:27.007 "nvme_admin": false, 00:18:27.007 "nvme_io": false, 00:18:27.007 "nvme_io_md": false, 00:18:27.007 "write_zeroes": true, 00:18:27.007 "zcopy": true, 00:18:27.007 "get_zone_info": false, 00:18:27.007 "zone_management": false, 00:18:27.007 "zone_append": false, 00:18:27.007 "compare": false, 00:18:27.007 "compare_and_write": false, 00:18:27.007 "abort": true, 00:18:27.007 
"seek_hole": false, 00:18:27.007 "seek_data": false, 00:18:27.007 "copy": true, 00:18:27.007 "nvme_iov_md": false 00:18:27.007 }, 00:18:27.007 "memory_domains": [ 00:18:27.007 { 00:18:27.007 "dma_device_id": "system", 00:18:27.007 "dma_device_type": 1 00:18:27.007 }, 00:18:27.007 { 00:18:27.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.007 "dma_device_type": 2 00:18:27.007 } 00:18:27.007 ], 00:18:27.007 "driver_specific": {} 00:18:27.007 } 00:18:27.007 ] 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:27.007 "name": "Existed_Raid", 00:18:27.007 "uuid": "b0d333a6-3e99-430d-93f2-76062e10d458", 00:18:27.007 "strip_size_kb": 64, 00:18:27.007 "state": "online", 00:18:27.007 "raid_level": "concat", 00:18:27.007 "superblock": false, 00:18:27.007 "num_base_bdevs": 3, 00:18:27.007 "num_base_bdevs_discovered": 3, 00:18:27.007 "num_base_bdevs_operational": 3, 00:18:27.007 "base_bdevs_list": [ 00:18:27.007 { 00:18:27.007 "name": "BaseBdev1", 00:18:27.007 "uuid": "f1795d8f-2084-45a6-bd9b-a26794e9adfa", 00:18:27.007 "is_configured": true, 00:18:27.007 "data_offset": 0, 00:18:27.007 "data_size": 65536 00:18:27.007 }, 00:18:27.007 { 00:18:27.007 "name": "BaseBdev2", 00:18:27.007 "uuid": "667490f6-c6b8-4d40-a021-adce80a06ab0", 00:18:27.007 "is_configured": true, 00:18:27.007 "data_offset": 0, 00:18:27.007 "data_size": 65536 00:18:27.007 }, 00:18:27.007 { 00:18:27.007 "name": "BaseBdev3", 00:18:27.007 "uuid": "9a5b83b7-4f6b-4541-ab4f-5ec0cd55d9b1", 00:18:27.007 "is_configured": true, 00:18:27.007 "data_offset": 0, 00:18:27.007 "data_size": 65536 00:18:27.007 } 00:18:27.007 ] 00:18:27.007 }' 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:27.007 16:35:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.573 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:27.573 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:27.573 16:35:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:27.573 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:27.573 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:27.573 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:27.573 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:27.573 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:27.832 [2024-07-24 16:35:24.468146] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.832 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:27.832 "name": "Existed_Raid", 00:18:27.832 "aliases": [ 00:18:27.832 "b0d333a6-3e99-430d-93f2-76062e10d458" 00:18:27.832 ], 00:18:27.832 "product_name": "Raid Volume", 00:18:27.832 "block_size": 512, 00:18:27.832 "num_blocks": 196608, 00:18:27.832 "uuid": "b0d333a6-3e99-430d-93f2-76062e10d458", 00:18:27.832 "assigned_rate_limits": { 00:18:27.832 "rw_ios_per_sec": 0, 00:18:27.832 "rw_mbytes_per_sec": 0, 00:18:27.832 "r_mbytes_per_sec": 0, 00:18:27.832 "w_mbytes_per_sec": 0 00:18:27.832 }, 00:18:27.832 "claimed": false, 00:18:27.832 "zoned": false, 00:18:27.832 "supported_io_types": { 00:18:27.832 "read": true, 00:18:27.832 "write": true, 00:18:27.832 "unmap": true, 00:18:27.832 "flush": true, 00:18:27.832 "reset": true, 00:18:27.832 "nvme_admin": false, 00:18:27.832 "nvme_io": false, 00:18:27.832 "nvme_io_md": false, 00:18:27.832 "write_zeroes": true, 00:18:27.832 "zcopy": false, 00:18:27.832 "get_zone_info": false, 00:18:27.832 "zone_management": false, 00:18:27.832 "zone_append": false, 00:18:27.832 "compare": false, 00:18:27.832 "compare_and_write": false, 00:18:27.832 "abort": 
false, 00:18:27.832 "seek_hole": false, 00:18:27.832 "seek_data": false, 00:18:27.832 "copy": false, 00:18:27.832 "nvme_iov_md": false 00:18:27.832 }, 00:18:27.832 "memory_domains": [ 00:18:27.832 { 00:18:27.832 "dma_device_id": "system", 00:18:27.832 "dma_device_type": 1 00:18:27.832 }, 00:18:27.832 { 00:18:27.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.832 "dma_device_type": 2 00:18:27.832 }, 00:18:27.832 { 00:18:27.832 "dma_device_id": "system", 00:18:27.832 "dma_device_type": 1 00:18:27.832 }, 00:18:27.832 { 00:18:27.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.832 "dma_device_type": 2 00:18:27.832 }, 00:18:27.832 { 00:18:27.832 "dma_device_id": "system", 00:18:27.832 "dma_device_type": 1 00:18:27.832 }, 00:18:27.832 { 00:18:27.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.832 "dma_device_type": 2 00:18:27.832 } 00:18:27.832 ], 00:18:27.832 "driver_specific": { 00:18:27.832 "raid": { 00:18:27.832 "uuid": "b0d333a6-3e99-430d-93f2-76062e10d458", 00:18:27.832 "strip_size_kb": 64, 00:18:27.832 "state": "online", 00:18:27.832 "raid_level": "concat", 00:18:27.832 "superblock": false, 00:18:27.832 "num_base_bdevs": 3, 00:18:27.832 "num_base_bdevs_discovered": 3, 00:18:27.832 "num_base_bdevs_operational": 3, 00:18:27.832 "base_bdevs_list": [ 00:18:27.832 { 00:18:27.832 "name": "BaseBdev1", 00:18:27.832 "uuid": "f1795d8f-2084-45a6-bd9b-a26794e9adfa", 00:18:27.832 "is_configured": true, 00:18:27.832 "data_offset": 0, 00:18:27.832 "data_size": 65536 00:18:27.832 }, 00:18:27.832 { 00:18:27.832 "name": "BaseBdev2", 00:18:27.832 "uuid": "667490f6-c6b8-4d40-a021-adce80a06ab0", 00:18:27.832 "is_configured": true, 00:18:27.832 "data_offset": 0, 00:18:27.832 "data_size": 65536 00:18:27.832 }, 00:18:27.832 { 00:18:27.832 "name": "BaseBdev3", 00:18:27.832 "uuid": "9a5b83b7-4f6b-4541-ab4f-5ec0cd55d9b1", 00:18:27.832 "is_configured": true, 00:18:27.832 "data_offset": 0, 00:18:27.832 "data_size": 65536 00:18:27.832 } 00:18:27.832 ] 00:18:27.832 } 
00:18:27.832 } 00:18:27.832 }' 00:18:27.832 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:27.832 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:27.832 BaseBdev2 00:18:27.832 BaseBdev3' 00:18:27.832 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:27.833 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:27.833 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:28.091 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:28.091 "name": "BaseBdev1", 00:18:28.091 "aliases": [ 00:18:28.091 "f1795d8f-2084-45a6-bd9b-a26794e9adfa" 00:18:28.091 ], 00:18:28.091 "product_name": "Malloc disk", 00:18:28.091 "block_size": 512, 00:18:28.091 "num_blocks": 65536, 00:18:28.091 "uuid": "f1795d8f-2084-45a6-bd9b-a26794e9adfa", 00:18:28.091 "assigned_rate_limits": { 00:18:28.091 "rw_ios_per_sec": 0, 00:18:28.091 "rw_mbytes_per_sec": 0, 00:18:28.091 "r_mbytes_per_sec": 0, 00:18:28.091 "w_mbytes_per_sec": 0 00:18:28.091 }, 00:18:28.091 "claimed": true, 00:18:28.091 "claim_type": "exclusive_write", 00:18:28.091 "zoned": false, 00:18:28.091 "supported_io_types": { 00:18:28.091 "read": true, 00:18:28.091 "write": true, 00:18:28.091 "unmap": true, 00:18:28.091 "flush": true, 00:18:28.091 "reset": true, 00:18:28.091 "nvme_admin": false, 00:18:28.091 "nvme_io": false, 00:18:28.091 "nvme_io_md": false, 00:18:28.091 "write_zeroes": true, 00:18:28.091 "zcopy": true, 00:18:28.091 "get_zone_info": false, 00:18:28.091 "zone_management": false, 00:18:28.091 "zone_append": false, 00:18:28.091 "compare": false, 00:18:28.091 
"compare_and_write": false, 00:18:28.091 "abort": true, 00:18:28.091 "seek_hole": false, 00:18:28.091 "seek_data": false, 00:18:28.091 "copy": true, 00:18:28.091 "nvme_iov_md": false 00:18:28.091 }, 00:18:28.091 "memory_domains": [ 00:18:28.091 { 00:18:28.091 "dma_device_id": "system", 00:18:28.091 "dma_device_type": 1 00:18:28.091 }, 00:18:28.091 { 00:18:28.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.091 "dma_device_type": 2 00:18:28.091 } 00:18:28.091 ], 00:18:28.091 "driver_specific": {} 00:18:28.091 }' 00:18:28.091 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.091 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.091 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:28.091 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.091 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.091 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:28.091 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.386 16:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.386 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:28.386 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.386 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.386 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:28.386 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:28.386 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:28.386 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:28.645 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:28.645 "name": "BaseBdev2", 00:18:28.645 "aliases": [ 00:18:28.645 "667490f6-c6b8-4d40-a021-adce80a06ab0" 00:18:28.645 ], 00:18:28.645 "product_name": "Malloc disk", 00:18:28.645 "block_size": 512, 00:18:28.645 "num_blocks": 65536, 00:18:28.645 "uuid": "667490f6-c6b8-4d40-a021-adce80a06ab0", 00:18:28.645 "assigned_rate_limits": { 00:18:28.645 "rw_ios_per_sec": 0, 00:18:28.645 "rw_mbytes_per_sec": 0, 00:18:28.645 "r_mbytes_per_sec": 0, 00:18:28.645 "w_mbytes_per_sec": 0 00:18:28.645 }, 00:18:28.645 "claimed": true, 00:18:28.645 "claim_type": "exclusive_write", 00:18:28.645 "zoned": false, 00:18:28.645 "supported_io_types": { 00:18:28.645 "read": true, 00:18:28.645 "write": true, 00:18:28.645 "unmap": true, 00:18:28.645 "flush": true, 00:18:28.645 "reset": true, 00:18:28.645 "nvme_admin": false, 00:18:28.645 "nvme_io": false, 00:18:28.645 "nvme_io_md": false, 00:18:28.645 "write_zeroes": true, 00:18:28.645 "zcopy": true, 00:18:28.645 "get_zone_info": false, 00:18:28.645 "zone_management": false, 00:18:28.645 "zone_append": false, 00:18:28.645 "compare": false, 00:18:28.645 "compare_and_write": false, 00:18:28.645 "abort": true, 00:18:28.645 "seek_hole": false, 00:18:28.645 "seek_data": false, 00:18:28.645 "copy": true, 00:18:28.645 "nvme_iov_md": false 00:18:28.645 }, 00:18:28.645 "memory_domains": [ 00:18:28.645 { 00:18:28.645 "dma_device_id": "system", 00:18:28.645 "dma_device_type": 1 00:18:28.645 }, 00:18:28.645 { 00:18:28.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.645 "dma_device_type": 2 00:18:28.645 } 00:18:28.645 ], 00:18:28.645 "driver_specific": {} 00:18:28.645 }' 00:18:28.645 16:35:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.645 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:28.645 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:28.645 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.645 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:28.904 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:28.904 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.904 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:28.904 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:28.904 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.904 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:28.904 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:28.904 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:28.904 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:28.904 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:29.163 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:29.163 "name": "BaseBdev3", 00:18:29.163 "aliases": [ 00:18:29.163 "9a5b83b7-4f6b-4541-ab4f-5ec0cd55d9b1" 00:18:29.163 ], 00:18:29.163 "product_name": "Malloc disk", 00:18:29.163 "block_size": 512, 00:18:29.163 "num_blocks": 65536, 00:18:29.163 "uuid": "9a5b83b7-4f6b-4541-ab4f-5ec0cd55d9b1", 
00:18:29.163 "assigned_rate_limits": { 00:18:29.163 "rw_ios_per_sec": 0, 00:18:29.163 "rw_mbytes_per_sec": 0, 00:18:29.163 "r_mbytes_per_sec": 0, 00:18:29.163 "w_mbytes_per_sec": 0 00:18:29.163 }, 00:18:29.163 "claimed": true, 00:18:29.163 "claim_type": "exclusive_write", 00:18:29.163 "zoned": false, 00:18:29.163 "supported_io_types": { 00:18:29.163 "read": true, 00:18:29.163 "write": true, 00:18:29.163 "unmap": true, 00:18:29.163 "flush": true, 00:18:29.163 "reset": true, 00:18:29.163 "nvme_admin": false, 00:18:29.163 "nvme_io": false, 00:18:29.163 "nvme_io_md": false, 00:18:29.163 "write_zeroes": true, 00:18:29.163 "zcopy": true, 00:18:29.163 "get_zone_info": false, 00:18:29.163 "zone_management": false, 00:18:29.163 "zone_append": false, 00:18:29.163 "compare": false, 00:18:29.164 "compare_and_write": false, 00:18:29.164 "abort": true, 00:18:29.164 "seek_hole": false, 00:18:29.164 "seek_data": false, 00:18:29.164 "copy": true, 00:18:29.164 "nvme_iov_md": false 00:18:29.164 }, 00:18:29.164 "memory_domains": [ 00:18:29.164 { 00:18:29.164 "dma_device_id": "system", 00:18:29.164 "dma_device_type": 1 00:18:29.164 }, 00:18:29.164 { 00:18:29.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.164 "dma_device_type": 2 00:18:29.164 } 00:18:29.164 ], 00:18:29.164 "driver_specific": {} 00:18:29.164 }' 00:18:29.164 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:29.164 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:29.164 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:29.164 16:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.422 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:29.422 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:29.422 16:35:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.422 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:29.422 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:29.422 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.422 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:29.422 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:29.422 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:29.680 [2024-07-24 16:35:26.437213] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:29.680 [2024-07-24 16:35:26.437246] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.680 [2024-07-24 16:35:26.437311] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=offline 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.680 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.937 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.937 "name": "Existed_Raid", 00:18:29.937 "uuid": "b0d333a6-3e99-430d-93f2-76062e10d458", 00:18:29.937 "strip_size_kb": 64, 00:18:29.937 "state": "offline", 00:18:29.937 "raid_level": "concat", 00:18:29.937 "superblock": false, 00:18:29.937 "num_base_bdevs": 3, 00:18:29.937 "num_base_bdevs_discovered": 2, 00:18:29.937 "num_base_bdevs_operational": 2, 00:18:29.937 "base_bdevs_list": [ 00:18:29.937 { 00:18:29.937 "name": null, 00:18:29.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.937 "is_configured": false, 00:18:29.937 "data_offset": 0, 00:18:29.937 "data_size": 65536 00:18:29.937 }, 00:18:29.937 { 00:18:29.937 "name": "BaseBdev2", 00:18:29.937 "uuid": "667490f6-c6b8-4d40-a021-adce80a06ab0", 00:18:29.937 "is_configured": true, 
00:18:29.937 "data_offset": 0, 00:18:29.937 "data_size": 65536 00:18:29.937 }, 00:18:29.937 { 00:18:29.937 "name": "BaseBdev3", 00:18:29.937 "uuid": "9a5b83b7-4f6b-4541-ab4f-5ec0cd55d9b1", 00:18:29.937 "is_configured": true, 00:18:29.937 "data_offset": 0, 00:18:29.937 "data_size": 65536 00:18:29.937 } 00:18:29.937 ] 00:18:29.937 }' 00:18:29.937 16:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.937 16:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.502 16:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:30.502 16:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:30.502 16:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:30.502 16:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.760 16:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:30.760 16:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:30.760 16:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:31.018 [2024-07-24 16:35:27.747242] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:31.276 16:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:31.276 16:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:31.276 16:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:31.276 16:35:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.276 16:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:31.276 16:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:31.276 16:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:31.534 [2024-07-24 16:35:28.330309] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:31.534 [2024-07-24 16:35:28.330374] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:18:31.792 16:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:31.792 16:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:31.792 16:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.792 16:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:32.049 16:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:32.049 16:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:32.049 16:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:32.049 16:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:32.049 16:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:32.049 16:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:32.307 BaseBdev2 00:18:32.307 16:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:32.307 16:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:32.307 16:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:32.307 16:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:32.307 16:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:32.307 16:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:32.307 16:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:32.565 16:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:32.565 [ 00:18:32.565 { 00:18:32.565 "name": "BaseBdev2", 00:18:32.565 "aliases": [ 00:18:32.565 "adaf7fb1-2eaa-4103-a2ee-b74df45bd4ef" 00:18:32.565 ], 00:18:32.565 "product_name": "Malloc disk", 00:18:32.565 "block_size": 512, 00:18:32.565 "num_blocks": 65536, 00:18:32.565 "uuid": "adaf7fb1-2eaa-4103-a2ee-b74df45bd4ef", 00:18:32.565 "assigned_rate_limits": { 00:18:32.565 "rw_ios_per_sec": 0, 00:18:32.565 "rw_mbytes_per_sec": 0, 00:18:32.565 "r_mbytes_per_sec": 0, 00:18:32.565 "w_mbytes_per_sec": 0 00:18:32.565 }, 00:18:32.565 "claimed": false, 00:18:32.565 "zoned": false, 00:18:32.565 "supported_io_types": { 00:18:32.565 "read": true, 00:18:32.565 "write": true, 00:18:32.565 "unmap": true, 00:18:32.565 "flush": true, 00:18:32.565 
"reset": true, 00:18:32.565 "nvme_admin": false, 00:18:32.565 "nvme_io": false, 00:18:32.565 "nvme_io_md": false, 00:18:32.565 "write_zeroes": true, 00:18:32.565 "zcopy": true, 00:18:32.565 "get_zone_info": false, 00:18:32.565 "zone_management": false, 00:18:32.565 "zone_append": false, 00:18:32.565 "compare": false, 00:18:32.565 "compare_and_write": false, 00:18:32.565 "abort": true, 00:18:32.565 "seek_hole": false, 00:18:32.565 "seek_data": false, 00:18:32.565 "copy": true, 00:18:32.565 "nvme_iov_md": false 00:18:32.565 }, 00:18:32.565 "memory_domains": [ 00:18:32.565 { 00:18:32.565 "dma_device_id": "system", 00:18:32.565 "dma_device_type": 1 00:18:32.565 }, 00:18:32.565 { 00:18:32.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.565 "dma_device_type": 2 00:18:32.565 } 00:18:32.565 ], 00:18:32.565 "driver_specific": {} 00:18:32.565 } 00:18:32.565 ] 00:18:32.823 16:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:32.823 16:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:32.823 16:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:32.823 16:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:33.080 BaseBdev3 00:18:33.080 16:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:33.080 16:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:33.080 16:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:33.080 16:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:33.080 16:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:33.080 16:35:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:33.080 16:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:33.080 16:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:33.338 [ 00:18:33.338 { 00:18:33.338 "name": "BaseBdev3", 00:18:33.338 "aliases": [ 00:18:33.338 "8d6a96ba-545c-4423-a4c7-652b137a6cc3" 00:18:33.338 ], 00:18:33.338 "product_name": "Malloc disk", 00:18:33.338 "block_size": 512, 00:18:33.338 "num_blocks": 65536, 00:18:33.338 "uuid": "8d6a96ba-545c-4423-a4c7-652b137a6cc3", 00:18:33.338 "assigned_rate_limits": { 00:18:33.338 "rw_ios_per_sec": 0, 00:18:33.338 "rw_mbytes_per_sec": 0, 00:18:33.338 "r_mbytes_per_sec": 0, 00:18:33.338 "w_mbytes_per_sec": 0 00:18:33.338 }, 00:18:33.338 "claimed": false, 00:18:33.338 "zoned": false, 00:18:33.338 "supported_io_types": { 00:18:33.338 "read": true, 00:18:33.338 "write": true, 00:18:33.338 "unmap": true, 00:18:33.338 "flush": true, 00:18:33.338 "reset": true, 00:18:33.338 "nvme_admin": false, 00:18:33.338 "nvme_io": false, 00:18:33.338 "nvme_io_md": false, 00:18:33.338 "write_zeroes": true, 00:18:33.338 "zcopy": true, 00:18:33.338 "get_zone_info": false, 00:18:33.338 "zone_management": false, 00:18:33.338 "zone_append": false, 00:18:33.338 "compare": false, 00:18:33.338 "compare_and_write": false, 00:18:33.338 "abort": true, 00:18:33.338 "seek_hole": false, 00:18:33.338 "seek_data": false, 00:18:33.338 "copy": true, 00:18:33.338 "nvme_iov_md": false 00:18:33.338 }, 00:18:33.338 "memory_domains": [ 00:18:33.338 { 00:18:33.338 "dma_device_id": "system", 00:18:33.338 "dma_device_type": 1 00:18:33.338 }, 00:18:33.338 { 00:18:33.338 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:33.338 "dma_device_type": 2 00:18:33.338 } 00:18:33.338 ], 00:18:33.338 "driver_specific": {} 00:18:33.338 } 00:18:33.338 ] 00:18:33.338 16:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:33.338 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:33.338 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:33.338 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:33.596 [2024-07-24 16:35:30.376094] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:33.596 [2024-07-24 16:35:30.376152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:33.596 [2024-07-24 16:35:30.376187] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.596 [2024-07-24 16:35:30.378513] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:33.596 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:33.596 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:33.596 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:33.596 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:33.596 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:33.596 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:33.596 16:35:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.596 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.596 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.596 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.596 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.596 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.854 16:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:33.854 "name": "Existed_Raid", 00:18:33.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.854 "strip_size_kb": 64, 00:18:33.854 "state": "configuring", 00:18:33.854 "raid_level": "concat", 00:18:33.854 "superblock": false, 00:18:33.854 "num_base_bdevs": 3, 00:18:33.854 "num_base_bdevs_discovered": 2, 00:18:33.854 "num_base_bdevs_operational": 3, 00:18:33.854 "base_bdevs_list": [ 00:18:33.854 { 00:18:33.854 "name": "BaseBdev1", 00:18:33.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.854 "is_configured": false, 00:18:33.854 "data_offset": 0, 00:18:33.854 "data_size": 0 00:18:33.854 }, 00:18:33.854 { 00:18:33.854 "name": "BaseBdev2", 00:18:33.854 "uuid": "adaf7fb1-2eaa-4103-a2ee-b74df45bd4ef", 00:18:33.854 "is_configured": true, 00:18:33.854 "data_offset": 0, 00:18:33.854 "data_size": 65536 00:18:33.854 }, 00:18:33.854 { 00:18:33.854 "name": "BaseBdev3", 00:18:33.854 "uuid": "8d6a96ba-545c-4423-a4c7-652b137a6cc3", 00:18:33.854 "is_configured": true, 00:18:33.854 "data_offset": 0, 00:18:33.854 "data_size": 65536 00:18:33.854 } 00:18:33.854 ] 00:18:33.854 }' 00:18:33.854 16:35:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:33.854 16:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.420 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:34.677 [2024-07-24 16:35:31.390830] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:34.677 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:34.677 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:34.677 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:34.677 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:34.677 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:34.677 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:34.677 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.677 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.677 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.677 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.677 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.677 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
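The `verify_raid_bdev_state` helper traced above fetches `bdev_raid_get_bdevs all`, selects the named raid bdev with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares its fields against the expected state. The same check can be sketched in Python against the JSON shape shown in this trace (a standalone sketch for illustration, not part of the SPDK test suite; the abridged `RPC_OUTPUT` below copies the field names and values from the dump above):

```python
import json

# Abridged bdev_raid_get_bdevs output, copied from the trace above
# (BaseBdev1 is unconfigured at this point, so only 2 of 3 are discovered).
RPC_OUTPUT = """
[
  {
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "superblock": false,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": false, "data_size": 0},
      {"name": "BaseBdev2", "is_configured": true, "data_size": 65536},
      {"name": "BaseBdev3", "is_configured": true, "data_size": 65536}
    ]
  }
]
"""

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size_kb, num_base_bdevs_operational):
    """Mirror the shell helper: pick the named raid bdev (like jq's
    select(.name == ...)) and compare the fields the test asserts on."""
    info = next(b for b in bdevs if b["name"] == name)
    # discovered base bdevs = configured entries in base_bdevs_list
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size_kb
            and info["num_base_bdevs_operational"] == num_base_bdevs_operational
            and info["num_base_bdevs_discovered"] == discovered)

ok = verify_raid_bdev_state(json.loads(RPC_OUTPUT), "Existed_Raid",
                            "configuring", "concat", 64, 3)
# ok is True for the state dumped above; it would be False for "online"
```

In the real test the comparison failing makes the shell function return non-zero and the run abort, which is why the trace re-runs the same `bdev_raid_get_bdevs all | jq` pair after every mutation.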
00:18:34.935 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.935 "name": "Existed_Raid", 00:18:34.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.935 "strip_size_kb": 64, 00:18:34.935 "state": "configuring", 00:18:34.935 "raid_level": "concat", 00:18:34.935 "superblock": false, 00:18:34.935 "num_base_bdevs": 3, 00:18:34.935 "num_base_bdevs_discovered": 1, 00:18:34.935 "num_base_bdevs_operational": 3, 00:18:34.935 "base_bdevs_list": [ 00:18:34.935 { 00:18:34.935 "name": "BaseBdev1", 00:18:34.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.935 "is_configured": false, 00:18:34.935 "data_offset": 0, 00:18:34.935 "data_size": 0 00:18:34.935 }, 00:18:34.935 { 00:18:34.935 "name": null, 00:18:34.935 "uuid": "adaf7fb1-2eaa-4103-a2ee-b74df45bd4ef", 00:18:34.935 "is_configured": false, 00:18:34.935 "data_offset": 0, 00:18:34.935 "data_size": 65536 00:18:34.935 }, 00:18:34.935 { 00:18:34.935 "name": "BaseBdev3", 00:18:34.935 "uuid": "8d6a96ba-545c-4423-a4c7-652b137a6cc3", 00:18:34.935 "is_configured": true, 00:18:34.935 "data_offset": 0, 00:18:34.935 "data_size": 65536 00:18:34.935 } 00:18:34.935 ] 00:18:34.935 }' 00:18:34.935 16:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.935 16:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.502 16:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.502 16:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:35.761 16:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:35.761 16:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:36.021 [2024-07-24 16:35:32.641538] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:36.021 BaseBdev1 00:18:36.021 16:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:36.021 16:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:36.021 16:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:36.021 16:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:36.021 16:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:36.021 16:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:36.021 16:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:36.280 16:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:36.280 [ 00:18:36.280 { 00:18:36.280 "name": "BaseBdev1", 00:18:36.280 "aliases": [ 00:18:36.280 "e5cfb4f2-93ba-4dc2-9ea5-b0eede403c30" 00:18:36.280 ], 00:18:36.280 "product_name": "Malloc disk", 00:18:36.280 "block_size": 512, 00:18:36.280 "num_blocks": 65536, 00:18:36.280 "uuid": "e5cfb4f2-93ba-4dc2-9ea5-b0eede403c30", 00:18:36.280 "assigned_rate_limits": { 00:18:36.280 "rw_ios_per_sec": 0, 00:18:36.280 "rw_mbytes_per_sec": 0, 00:18:36.280 "r_mbytes_per_sec": 0, 00:18:36.280 "w_mbytes_per_sec": 0 00:18:36.280 }, 00:18:36.280 "claimed": true, 00:18:36.280 "claim_type": "exclusive_write", 00:18:36.280 "zoned": false, 00:18:36.280 "supported_io_types": { 00:18:36.280 "read": 
true, 00:18:36.280 "write": true, 00:18:36.280 "unmap": true, 00:18:36.280 "flush": true, 00:18:36.280 "reset": true, 00:18:36.280 "nvme_admin": false, 00:18:36.280 "nvme_io": false, 00:18:36.280 "nvme_io_md": false, 00:18:36.280 "write_zeroes": true, 00:18:36.280 "zcopy": true, 00:18:36.280 "get_zone_info": false, 00:18:36.280 "zone_management": false, 00:18:36.280 "zone_append": false, 00:18:36.280 "compare": false, 00:18:36.280 "compare_and_write": false, 00:18:36.280 "abort": true, 00:18:36.280 "seek_hole": false, 00:18:36.280 "seek_data": false, 00:18:36.280 "copy": true, 00:18:36.280 "nvme_iov_md": false 00:18:36.280 }, 00:18:36.280 "memory_domains": [ 00:18:36.280 { 00:18:36.280 "dma_device_id": "system", 00:18:36.280 "dma_device_type": 1 00:18:36.280 }, 00:18:36.280 { 00:18:36.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.280 "dma_device_type": 2 00:18:36.280 } 00:18:36.280 ], 00:18:36.280 "driver_specific": {} 00:18:36.280 } 00:18:36.280 ] 00:18:36.280 16:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:36.280 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:36.280 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:36.280 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:36.280 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:36.280 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:36.281 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:36.281 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.281 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:18:36.281 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:36.281 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.281 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.281 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.540 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:36.540 "name": "Existed_Raid", 00:18:36.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.540 "strip_size_kb": 64, 00:18:36.540 "state": "configuring", 00:18:36.540 "raid_level": "concat", 00:18:36.540 "superblock": false, 00:18:36.540 "num_base_bdevs": 3, 00:18:36.540 "num_base_bdevs_discovered": 2, 00:18:36.540 "num_base_bdevs_operational": 3, 00:18:36.540 "base_bdevs_list": [ 00:18:36.540 { 00:18:36.540 "name": "BaseBdev1", 00:18:36.540 "uuid": "e5cfb4f2-93ba-4dc2-9ea5-b0eede403c30", 00:18:36.540 "is_configured": true, 00:18:36.540 "data_offset": 0, 00:18:36.540 "data_size": 65536 00:18:36.540 }, 00:18:36.540 { 00:18:36.540 "name": null, 00:18:36.540 "uuid": "adaf7fb1-2eaa-4103-a2ee-b74df45bd4ef", 00:18:36.540 "is_configured": false, 00:18:36.540 "data_offset": 0, 00:18:36.540 "data_size": 65536 00:18:36.540 }, 00:18:36.540 { 00:18:36.540 "name": "BaseBdev3", 00:18:36.540 "uuid": "8d6a96ba-545c-4423-a4c7-652b137a6cc3", 00:18:36.540 "is_configured": true, 00:18:36.540 "data_offset": 0, 00:18:36.540 "data_size": 65536 00:18:36.540 } 00:18:36.540 ] 00:18:36.540 }' 00:18:36.540 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:36.540 16:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.106 16:35:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.106 16:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:37.365 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:37.365 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:37.624 [2024-07-24 16:35:34.302133] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:37.624 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:37.624 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:37.624 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:37.624 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:37.624 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:37.624 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:37.624 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:37.624 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:37.624 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:37.624 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:37.624 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.624 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.883 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:37.883 "name": "Existed_Raid", 00:18:37.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.883 "strip_size_kb": 64, 00:18:37.883 "state": "configuring", 00:18:37.883 "raid_level": "concat", 00:18:37.883 "superblock": false, 00:18:37.883 "num_base_bdevs": 3, 00:18:37.883 "num_base_bdevs_discovered": 1, 00:18:37.883 "num_base_bdevs_operational": 3, 00:18:37.883 "base_bdevs_list": [ 00:18:37.883 { 00:18:37.883 "name": "BaseBdev1", 00:18:37.883 "uuid": "e5cfb4f2-93ba-4dc2-9ea5-b0eede403c30", 00:18:37.883 "is_configured": true, 00:18:37.883 "data_offset": 0, 00:18:37.883 "data_size": 65536 00:18:37.883 }, 00:18:37.883 { 00:18:37.883 "name": null, 00:18:37.883 "uuid": "adaf7fb1-2eaa-4103-a2ee-b74df45bd4ef", 00:18:37.883 "is_configured": false, 00:18:37.883 "data_offset": 0, 00:18:37.883 "data_size": 65536 00:18:37.884 }, 00:18:37.884 { 00:18:37.884 "name": null, 00:18:37.884 "uuid": "8d6a96ba-545c-4423-a4c7-652b137a6cc3", 00:18:37.884 "is_configured": false, 00:18:37.884 "data_offset": 0, 00:18:37.884 "data_size": 65536 00:18:37.884 } 00:18:37.884 ] 00:18:37.884 }' 00:18:37.884 16:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:37.884 16:35:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.452 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.452 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:38.711 16:35:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:38.711 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:38.711 [2024-07-24 16:35:35.549519] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:38.711 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:38.711 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:38.711 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:38.711 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:38.711 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:38.711 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:38.711 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:38.711 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:38.711 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:38.711 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:38.711 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.970 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.970 16:35:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:38.970 "name": "Existed_Raid", 00:18:38.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.970 "strip_size_kb": 64, 00:18:38.970 "state": "configuring", 00:18:38.970 "raid_level": "concat", 00:18:38.970 "superblock": false, 00:18:38.970 "num_base_bdevs": 3, 00:18:38.970 "num_base_bdevs_discovered": 2, 00:18:38.970 "num_base_bdevs_operational": 3, 00:18:38.970 "base_bdevs_list": [ 00:18:38.970 { 00:18:38.970 "name": "BaseBdev1", 00:18:38.970 "uuid": "e5cfb4f2-93ba-4dc2-9ea5-b0eede403c30", 00:18:38.970 "is_configured": true, 00:18:38.970 "data_offset": 0, 00:18:38.970 "data_size": 65536 00:18:38.970 }, 00:18:38.970 { 00:18:38.970 "name": null, 00:18:38.970 "uuid": "adaf7fb1-2eaa-4103-a2ee-b74df45bd4ef", 00:18:38.970 "is_configured": false, 00:18:38.970 "data_offset": 0, 00:18:38.970 "data_size": 65536 00:18:38.970 }, 00:18:38.970 { 00:18:38.970 "name": "BaseBdev3", 00:18:38.970 "uuid": "8d6a96ba-545c-4423-a4c7-652b137a6cc3", 00:18:38.970 "is_configured": true, 00:18:38.970 "data_offset": 0, 00:18:38.970 "data_size": 65536 00:18:38.970 } 00:18:38.970 ] 00:18:38.970 }' 00:18:38.970 16:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:38.970 16:35:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.539 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.539 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:39.798 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:39.798 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
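The remove/add cycle traced here shows the bookkeeping the test exercises: removing or deleting a claimed base bdev leaves a placeholder slot in `base_bdevs_list` (`"name": null, "is_configured": false`) and decrements `num_base_bdevs_discovered`, while `bdev_raid_add_base_bdev` re-claims a slot and increments it. A minimal model of that observed behavior (a sketch, not SPDK code; the assumption that a re-added bdev takes the first free slot is inferred from this trace, where only one slot is ever free at a time):

```python
# Minimal model of the base-bdev slot bookkeeping observed in this trace.
def make_raid(names):
    """All slots start claimed, as after bdev_raid_create."""
    return {"state": "configuring",
            "base_bdevs_list": [{"name": n, "is_configured": True} for n in names]}

def remove_base_bdev(raid, name):
    """bdev_raid_remove_base_bdev: the slot survives as a null placeholder."""
    for slot in raid["base_bdevs_list"]:
        if slot["name"] == name:
            slot["name"] = None
            slot["is_configured"] = False

def add_base_bdev(raid, name):
    """bdev_raid_add_base_bdev: claim the first free slot (assumed)."""
    for slot in raid["base_bdevs_list"]:
        if slot["name"] is None:
            slot["name"] = name
            slot["is_configured"] = True
            break

def discovered(raid):
    """num_base_bdevs_discovered = configured slots."""
    return sum(1 for s in raid["base_bdevs_list"] if s["is_configured"])

raid = make_raid(["BaseBdev1", "BaseBdev2", "BaseBdev3"])
remove_base_bdev(raid, "BaseBdev2")
after_remove = discovered(raid)   # drops to 2, slot 1 becomes null
add_base_bdev(raid, "BaseBdev2")
after_add = discovered(raid)      # back to 3
```

This matches the trace's jq probes (`.[0].base_bdevs_list[1].is_configured` flipping between `false` and `true`) without touching a live RPC socket.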
00:18:40.058 [2024-07-24 16:35:36.817037] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:40.317 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:40.317 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:40.317 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.317 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:40.317 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:40.317 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:40.317 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.317 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.317 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.317 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.317 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.317 16:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.577 16:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.577 "name": "Existed_Raid", 00:18:40.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.577 "strip_size_kb": 64, 00:18:40.577 "state": "configuring", 00:18:40.577 "raid_level": "concat", 00:18:40.577 "superblock": false, 00:18:40.577 "num_base_bdevs": 3, 00:18:40.577 
"num_base_bdevs_discovered": 1, 00:18:40.577 "num_base_bdevs_operational": 3, 00:18:40.577 "base_bdevs_list": [ 00:18:40.577 { 00:18:40.577 "name": null, 00:18:40.577 "uuid": "e5cfb4f2-93ba-4dc2-9ea5-b0eede403c30", 00:18:40.577 "is_configured": false, 00:18:40.577 "data_offset": 0, 00:18:40.577 "data_size": 65536 00:18:40.577 }, 00:18:40.577 { 00:18:40.577 "name": null, 00:18:40.577 "uuid": "adaf7fb1-2eaa-4103-a2ee-b74df45bd4ef", 00:18:40.577 "is_configured": false, 00:18:40.577 "data_offset": 0, 00:18:40.577 "data_size": 65536 00:18:40.577 }, 00:18:40.577 { 00:18:40.577 "name": "BaseBdev3", 00:18:40.577 "uuid": "8d6a96ba-545c-4423-a4c7-652b137a6cc3", 00:18:40.577 "is_configured": true, 00:18:40.577 "data_offset": 0, 00:18:40.577 "data_size": 65536 00:18:40.577 } 00:18:40.577 ] 00:18:40.577 }' 00:18:40.577 16:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.577 16:35:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.208 16:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.208 16:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:41.208 16:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:41.208 16:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:41.471 [2024-07-24 16:35:38.207137] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:41.471 16:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:41.471 16:35:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:41.471 16:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:41.471 16:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:41.471 16:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:41.471 16:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:41.471 16:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:41.471 16:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:41.471 16:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:41.471 16:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:41.471 16:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.471 16:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.730 16:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.730 "name": "Existed_Raid", 00:18:41.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.730 "strip_size_kb": 64, 00:18:41.730 "state": "configuring", 00:18:41.730 "raid_level": "concat", 00:18:41.730 "superblock": false, 00:18:41.730 "num_base_bdevs": 3, 00:18:41.730 "num_base_bdevs_discovered": 2, 00:18:41.730 "num_base_bdevs_operational": 3, 00:18:41.730 "base_bdevs_list": [ 00:18:41.730 { 00:18:41.731 "name": null, 00:18:41.731 "uuid": "e5cfb4f2-93ba-4dc2-9ea5-b0eede403c30", 00:18:41.731 "is_configured": false, 00:18:41.731 "data_offset": 0, 
00:18:41.731 "data_size": 65536 00:18:41.731 }, 00:18:41.731 { 00:18:41.731 "name": "BaseBdev2", 00:18:41.731 "uuid": "adaf7fb1-2eaa-4103-a2ee-b74df45bd4ef", 00:18:41.731 "is_configured": true, 00:18:41.731 "data_offset": 0, 00:18:41.731 "data_size": 65536 00:18:41.731 }, 00:18:41.731 { 00:18:41.731 "name": "BaseBdev3", 00:18:41.731 "uuid": "8d6a96ba-545c-4423-a4c7-652b137a6cc3", 00:18:41.731 "is_configured": true, 00:18:41.731 "data_offset": 0, 00:18:41.731 "data_size": 65536 00:18:41.731 } 00:18:41.731 ] 00:18:41.731 }' 00:18:41.731 16:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.731 16:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.298 16:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.298 16:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:42.557 16:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:42.557 16:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.557 16:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:42.816 16:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e5cfb4f2-93ba-4dc2-9ea5-b0eede403c30 00:18:43.075 [2024-07-24 16:35:39.722540] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:43.075 [2024-07-24 16:35:39.722590] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000041780 00:18:43.075 [2024-07-24 16:35:39.722605] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:43.075 [2024-07-24 16:35:39.722911] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:18:43.075 [2024-07-24 16:35:39.723119] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:18:43.075 [2024-07-24 16:35:39.723133] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:18:43.075 [2024-07-24 16:35:39.723458] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.075 NewBaseBdev 00:18:43.075 16:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:43.075 16:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:43.075 16:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:43.075 16:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:43.075 16:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:43.075 16:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:43.075 16:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:43.333 16:35:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:43.333 [ 00:18:43.333 { 00:18:43.333 "name": "NewBaseBdev", 00:18:43.333 "aliases": [ 00:18:43.333 "e5cfb4f2-93ba-4dc2-9ea5-b0eede403c30" 00:18:43.333 ], 00:18:43.333 "product_name": "Malloc disk", 
00:18:43.333 "block_size": 512, 00:18:43.333 "num_blocks": 65536, 00:18:43.333 "uuid": "e5cfb4f2-93ba-4dc2-9ea5-b0eede403c30", 00:18:43.333 "assigned_rate_limits": { 00:18:43.333 "rw_ios_per_sec": 0, 00:18:43.333 "rw_mbytes_per_sec": 0, 00:18:43.333 "r_mbytes_per_sec": 0, 00:18:43.333 "w_mbytes_per_sec": 0 00:18:43.333 }, 00:18:43.333 "claimed": true, 00:18:43.333 "claim_type": "exclusive_write", 00:18:43.333 "zoned": false, 00:18:43.333 "supported_io_types": { 00:18:43.333 "read": true, 00:18:43.333 "write": true, 00:18:43.333 "unmap": true, 00:18:43.333 "flush": true, 00:18:43.333 "reset": true, 00:18:43.333 "nvme_admin": false, 00:18:43.333 "nvme_io": false, 00:18:43.333 "nvme_io_md": false, 00:18:43.333 "write_zeroes": true, 00:18:43.333 "zcopy": true, 00:18:43.333 "get_zone_info": false, 00:18:43.333 "zone_management": false, 00:18:43.333 "zone_append": false, 00:18:43.333 "compare": false, 00:18:43.333 "compare_and_write": false, 00:18:43.333 "abort": true, 00:18:43.333 "seek_hole": false, 00:18:43.333 "seek_data": false, 00:18:43.333 "copy": true, 00:18:43.333 "nvme_iov_md": false 00:18:43.333 }, 00:18:43.333 "memory_domains": [ 00:18:43.333 { 00:18:43.333 "dma_device_id": "system", 00:18:43.333 "dma_device_type": 1 00:18:43.333 }, 00:18:43.333 { 00:18:43.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:43.333 "dma_device_type": 2 00:18:43.333 } 00:18:43.333 ], 00:18:43.333 "driver_specific": {} 00:18:43.333 } 00:18:43.333 ] 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.591 "name": "Existed_Raid", 00:18:43.591 "uuid": "e3332aaa-586f-46b8-9c2a-7397f4a12a01", 00:18:43.591 "strip_size_kb": 64, 00:18:43.591 "state": "online", 00:18:43.591 "raid_level": "concat", 00:18:43.591 "superblock": false, 00:18:43.591 "num_base_bdevs": 3, 00:18:43.591 "num_base_bdevs_discovered": 3, 00:18:43.591 "num_base_bdevs_operational": 3, 00:18:43.591 "base_bdevs_list": [ 00:18:43.591 { 00:18:43.591 "name": "NewBaseBdev", 00:18:43.591 "uuid": "e5cfb4f2-93ba-4dc2-9ea5-b0eede403c30", 00:18:43.591 "is_configured": true, 00:18:43.591 "data_offset": 0, 00:18:43.591 "data_size": 65536 00:18:43.591 }, 00:18:43.591 { 00:18:43.591 "name": "BaseBdev2", 00:18:43.591 "uuid": "adaf7fb1-2eaa-4103-a2ee-b74df45bd4ef", 00:18:43.591 "is_configured": true, 00:18:43.591 "data_offset": 0, 00:18:43.591 "data_size": 65536 00:18:43.591 }, 
00:18:43.591 { 00:18:43.591 "name": "BaseBdev3", 00:18:43.591 "uuid": "8d6a96ba-545c-4423-a4c7-652b137a6cc3", 00:18:43.591 "is_configured": true, 00:18:43.591 "data_offset": 0, 00:18:43.591 "data_size": 65536 00:18:43.591 } 00:18:43.591 ] 00:18:43.591 }' 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.591 16:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.158 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:44.158 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:44.158 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:44.158 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:44.158 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:44.158 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:44.416 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:44.416 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:44.416 [2024-07-24 16:35:41.227055] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.416 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:44.416 "name": "Existed_Raid", 00:18:44.416 "aliases": [ 00:18:44.416 "e3332aaa-586f-46b8-9c2a-7397f4a12a01" 00:18:44.416 ], 00:18:44.416 "product_name": "Raid Volume", 00:18:44.416 "block_size": 512, 00:18:44.416 "num_blocks": 196608, 00:18:44.416 "uuid": "e3332aaa-586f-46b8-9c2a-7397f4a12a01", 00:18:44.416 "assigned_rate_limits": 
{ 00:18:44.416 "rw_ios_per_sec": 0, 00:18:44.416 "rw_mbytes_per_sec": 0, 00:18:44.416 "r_mbytes_per_sec": 0, 00:18:44.416 "w_mbytes_per_sec": 0 00:18:44.416 }, 00:18:44.416 "claimed": false, 00:18:44.416 "zoned": false, 00:18:44.416 "supported_io_types": { 00:18:44.416 "read": true, 00:18:44.416 "write": true, 00:18:44.416 "unmap": true, 00:18:44.416 "flush": true, 00:18:44.416 "reset": true, 00:18:44.416 "nvme_admin": false, 00:18:44.416 "nvme_io": false, 00:18:44.416 "nvme_io_md": false, 00:18:44.416 "write_zeroes": true, 00:18:44.416 "zcopy": false, 00:18:44.416 "get_zone_info": false, 00:18:44.416 "zone_management": false, 00:18:44.416 "zone_append": false, 00:18:44.416 "compare": false, 00:18:44.416 "compare_and_write": false, 00:18:44.416 "abort": false, 00:18:44.416 "seek_hole": false, 00:18:44.416 "seek_data": false, 00:18:44.416 "copy": false, 00:18:44.416 "nvme_iov_md": false 00:18:44.416 }, 00:18:44.416 "memory_domains": [ 00:18:44.416 { 00:18:44.416 "dma_device_id": "system", 00:18:44.416 "dma_device_type": 1 00:18:44.416 }, 00:18:44.416 { 00:18:44.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.416 "dma_device_type": 2 00:18:44.416 }, 00:18:44.416 { 00:18:44.416 "dma_device_id": "system", 00:18:44.416 "dma_device_type": 1 00:18:44.416 }, 00:18:44.416 { 00:18:44.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.416 "dma_device_type": 2 00:18:44.416 }, 00:18:44.416 { 00:18:44.416 "dma_device_id": "system", 00:18:44.416 "dma_device_type": 1 00:18:44.416 }, 00:18:44.416 { 00:18:44.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.416 "dma_device_type": 2 00:18:44.416 } 00:18:44.416 ], 00:18:44.416 "driver_specific": { 00:18:44.416 "raid": { 00:18:44.416 "uuid": "e3332aaa-586f-46b8-9c2a-7397f4a12a01", 00:18:44.416 "strip_size_kb": 64, 00:18:44.416 "state": "online", 00:18:44.416 "raid_level": "concat", 00:18:44.416 "superblock": false, 00:18:44.416 "num_base_bdevs": 3, 00:18:44.416 "num_base_bdevs_discovered": 3, 00:18:44.416 
"num_base_bdevs_operational": 3, 00:18:44.416 "base_bdevs_list": [ 00:18:44.416 { 00:18:44.416 "name": "NewBaseBdev", 00:18:44.416 "uuid": "e5cfb4f2-93ba-4dc2-9ea5-b0eede403c30", 00:18:44.416 "is_configured": true, 00:18:44.416 "data_offset": 0, 00:18:44.416 "data_size": 65536 00:18:44.416 }, 00:18:44.416 { 00:18:44.416 "name": "BaseBdev2", 00:18:44.416 "uuid": "adaf7fb1-2eaa-4103-a2ee-b74df45bd4ef", 00:18:44.416 "is_configured": true, 00:18:44.416 "data_offset": 0, 00:18:44.416 "data_size": 65536 00:18:44.416 }, 00:18:44.416 { 00:18:44.416 "name": "BaseBdev3", 00:18:44.416 "uuid": "8d6a96ba-545c-4423-a4c7-652b137a6cc3", 00:18:44.416 "is_configured": true, 00:18:44.416 "data_offset": 0, 00:18:44.417 "data_size": 65536 00:18:44.417 } 00:18:44.417 ] 00:18:44.417 } 00:18:44.417 } 00:18:44.417 }' 00:18:44.417 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:44.675 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:44.675 BaseBdev2 00:18:44.675 BaseBdev3' 00:18:44.675 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:44.675 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:44.675 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:44.675 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:44.675 "name": "NewBaseBdev", 00:18:44.675 "aliases": [ 00:18:44.675 "e5cfb4f2-93ba-4dc2-9ea5-b0eede403c30" 00:18:44.675 ], 00:18:44.675 "product_name": "Malloc disk", 00:18:44.675 "block_size": 512, 00:18:44.675 "num_blocks": 65536, 00:18:44.675 "uuid": "e5cfb4f2-93ba-4dc2-9ea5-b0eede403c30", 00:18:44.675 
"assigned_rate_limits": { 00:18:44.675 "rw_ios_per_sec": 0, 00:18:44.675 "rw_mbytes_per_sec": 0, 00:18:44.675 "r_mbytes_per_sec": 0, 00:18:44.675 "w_mbytes_per_sec": 0 00:18:44.675 }, 00:18:44.675 "claimed": true, 00:18:44.675 "claim_type": "exclusive_write", 00:18:44.675 "zoned": false, 00:18:44.675 "supported_io_types": { 00:18:44.675 "read": true, 00:18:44.675 "write": true, 00:18:44.675 "unmap": true, 00:18:44.675 "flush": true, 00:18:44.675 "reset": true, 00:18:44.675 "nvme_admin": false, 00:18:44.675 "nvme_io": false, 00:18:44.675 "nvme_io_md": false, 00:18:44.675 "write_zeroes": true, 00:18:44.675 "zcopy": true, 00:18:44.675 "get_zone_info": false, 00:18:44.675 "zone_management": false, 00:18:44.675 "zone_append": false, 00:18:44.675 "compare": false, 00:18:44.675 "compare_and_write": false, 00:18:44.675 "abort": true, 00:18:44.675 "seek_hole": false, 00:18:44.675 "seek_data": false, 00:18:44.675 "copy": true, 00:18:44.675 "nvme_iov_md": false 00:18:44.675 }, 00:18:44.675 "memory_domains": [ 00:18:44.675 { 00:18:44.675 "dma_device_id": "system", 00:18:44.675 "dma_device_type": 1 00:18:44.675 }, 00:18:44.675 { 00:18:44.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.675 "dma_device_type": 2 00:18:44.675 } 00:18:44.675 ], 00:18:44.675 "driver_specific": {} 00:18:44.675 }' 00:18:44.675 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.933 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:44.933 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:44.933 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.933 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:44.933 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:44.933 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 
-- # jq .md_interleave 00:18:44.933 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:44.933 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:44.933 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.191 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.191 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:45.191 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:45.191 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:45.191 16:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:45.450 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:45.450 "name": "BaseBdev2", 00:18:45.450 "aliases": [ 00:18:45.450 "adaf7fb1-2eaa-4103-a2ee-b74df45bd4ef" 00:18:45.450 ], 00:18:45.450 "product_name": "Malloc disk", 00:18:45.450 "block_size": 512, 00:18:45.450 "num_blocks": 65536, 00:18:45.450 "uuid": "adaf7fb1-2eaa-4103-a2ee-b74df45bd4ef", 00:18:45.450 "assigned_rate_limits": { 00:18:45.450 "rw_ios_per_sec": 0, 00:18:45.450 "rw_mbytes_per_sec": 0, 00:18:45.450 "r_mbytes_per_sec": 0, 00:18:45.450 "w_mbytes_per_sec": 0 00:18:45.450 }, 00:18:45.450 "claimed": true, 00:18:45.450 "claim_type": "exclusive_write", 00:18:45.450 "zoned": false, 00:18:45.450 "supported_io_types": { 00:18:45.450 "read": true, 00:18:45.450 "write": true, 00:18:45.450 "unmap": true, 00:18:45.450 "flush": true, 00:18:45.450 "reset": true, 00:18:45.450 "nvme_admin": false, 00:18:45.450 "nvme_io": false, 00:18:45.450 "nvme_io_md": false, 00:18:45.450 "write_zeroes": true, 00:18:45.450 "zcopy": 
true, 00:18:45.450 "get_zone_info": false, 00:18:45.450 "zone_management": false, 00:18:45.450 "zone_append": false, 00:18:45.450 "compare": false, 00:18:45.450 "compare_and_write": false, 00:18:45.450 "abort": true, 00:18:45.450 "seek_hole": false, 00:18:45.450 "seek_data": false, 00:18:45.450 "copy": true, 00:18:45.450 "nvme_iov_md": false 00:18:45.450 }, 00:18:45.450 "memory_domains": [ 00:18:45.450 { 00:18:45.450 "dma_device_id": "system", 00:18:45.450 "dma_device_type": 1 00:18:45.450 }, 00:18:45.450 { 00:18:45.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.450 "dma_device_type": 2 00:18:45.450 } 00:18:45.450 ], 00:18:45.450 "driver_specific": {} 00:18:45.450 }' 00:18:45.450 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.450 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.450 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:45.450 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.450 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.450 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:45.450 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.710 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.710 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:45.710 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.710 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.710 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:45.710 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 
-- # for name in $base_bdev_names 00:18:45.710 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:45.710 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:45.969 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:45.969 "name": "BaseBdev3", 00:18:45.969 "aliases": [ 00:18:45.969 "8d6a96ba-545c-4423-a4c7-652b137a6cc3" 00:18:45.969 ], 00:18:45.969 "product_name": "Malloc disk", 00:18:45.969 "block_size": 512, 00:18:45.969 "num_blocks": 65536, 00:18:45.969 "uuid": "8d6a96ba-545c-4423-a4c7-652b137a6cc3", 00:18:45.969 "assigned_rate_limits": { 00:18:45.969 "rw_ios_per_sec": 0, 00:18:45.969 "rw_mbytes_per_sec": 0, 00:18:45.969 "r_mbytes_per_sec": 0, 00:18:45.969 "w_mbytes_per_sec": 0 00:18:45.969 }, 00:18:45.969 "claimed": true, 00:18:45.969 "claim_type": "exclusive_write", 00:18:45.969 "zoned": false, 00:18:45.969 "supported_io_types": { 00:18:45.969 "read": true, 00:18:45.969 "write": true, 00:18:45.969 "unmap": true, 00:18:45.969 "flush": true, 00:18:45.969 "reset": true, 00:18:45.969 "nvme_admin": false, 00:18:45.969 "nvme_io": false, 00:18:45.969 "nvme_io_md": false, 00:18:45.969 "write_zeroes": true, 00:18:45.969 "zcopy": true, 00:18:45.969 "get_zone_info": false, 00:18:45.969 "zone_management": false, 00:18:45.969 "zone_append": false, 00:18:45.969 "compare": false, 00:18:45.969 "compare_and_write": false, 00:18:45.969 "abort": true, 00:18:45.969 "seek_hole": false, 00:18:45.969 "seek_data": false, 00:18:45.969 "copy": true, 00:18:45.969 "nvme_iov_md": false 00:18:45.969 }, 00:18:45.969 "memory_domains": [ 00:18:45.969 { 00:18:45.969 "dma_device_id": "system", 00:18:45.969 "dma_device_type": 1 00:18:45.969 }, 00:18:45.969 { 00:18:45.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.969 "dma_device_type": 2 00:18:45.969 } 
00:18:45.969 ], 00:18:45.969 "driver_specific": {} 00:18:45.969 }' 00:18:45.969 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.969 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.969 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:45.969 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.969 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.969 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:45.969 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.228 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.228 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:46.228 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.228 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.228 16:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:46.228 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:46.488 [2024-07-24 16:35:43.200020] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:46.488 [2024-07-24 16:35:43.200054] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.488 [2024-07-24 16:35:43.200157] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.488 [2024-07-24 16:35:43.200225] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 
0, going to free all in destruct 00:18:46.488 [2024-07-24 16:35:43.200248] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:18:46.488 16:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1642623 00:18:46.488 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 1642623 ']' 00:18:46.488 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 1642623 00:18:46.488 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:18:46.488 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:46.488 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1642623 00:18:46.488 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:46.488 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:46.488 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1642623' 00:18:46.488 killing process with pid 1642623 00:18:46.488 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 1642623 00:18:46.488 [2024-07-24 16:35:43.276442] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:46.488 16:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 1642623 00:18:47.055 [2024-07-24 16:35:43.618614] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:48.961 00:18:48.961 real 0m28.932s 00:18:48.961 user 0m50.276s 00:18:48.961 sys 0m4.959s 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.961 ************************************ 00:18:48.961 END TEST raid_state_function_test 00:18:48.961 ************************************ 00:18:48.961 16:35:45 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:18:48.961 16:35:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:48.961 16:35:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:48.961 16:35:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.961 ************************************ 00:18:48.961 START TEST raid_state_function_test_sb 00:18:48.961 ************************************ 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1648214 00:18:48.961 16:35:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1648214' 00:18:48.961 Process raid pid: 1648214 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1648214 /var/tmp/spdk-raid.sock 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 1648214 ']' 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:48.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:48.961 16:35:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.961 [2024-07-24 16:35:45.612928] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:18:48.961 [2024-07-24 16:35:45.613049] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.961 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:48.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.961 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:48.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.961 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:48.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.961 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:48.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.961 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:48.961 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.961 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:48.962 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:48.962 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:48.962 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:48.962 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:49.222 [2024-07-24 16:35:45.839162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.482 [2024-07-24 16:35:46.133741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.741 [2024-07-24 16:35:46.483498] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.741 [2024-07-24 16:35:46.483533] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.000 16:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.000 16:35:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:50.000 16:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:50.259 [2024-07-24 16:35:46.882985] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:50.259 [2024-07-24 16:35:46.883040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:50.260 [2024-07-24 16:35:46.883055] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.260 [2024-07-24 16:35:46.883075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.260 [2024-07-24 16:35:46.883087] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:50.260 [2024-07-24 16:35:46.883105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:50.260 16:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:50.260 16:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:50.260 16:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:50.260 16:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:50.260 16:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:50.260 16:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:50.260 16:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:50.260 16:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:50.260 16:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:50.260 16:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:50.260 16:35:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.260 16:35:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.519 16:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:50.519 "name": "Existed_Raid", 00:18:50.519 "uuid": "d0203073-3a36-4100-861a-11fe0b441c5c", 00:18:50.519 "strip_size_kb": 64, 00:18:50.519 "state": "configuring", 00:18:50.519 "raid_level": "concat", 00:18:50.519 "superblock": true, 00:18:50.519 "num_base_bdevs": 3, 00:18:50.519 "num_base_bdevs_discovered": 0, 00:18:50.519 "num_base_bdevs_operational": 3, 00:18:50.519 "base_bdevs_list": [ 00:18:50.519 { 00:18:50.519 "name": "BaseBdev1", 00:18:50.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.519 "is_configured": false, 00:18:50.519 "data_offset": 0, 00:18:50.519 "data_size": 0 00:18:50.519 }, 00:18:50.519 { 00:18:50.519 "name": "BaseBdev2", 00:18:50.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.519 "is_configured": false, 00:18:50.519 "data_offset": 0, 00:18:50.519 "data_size": 0 00:18:50.519 }, 00:18:50.519 { 00:18:50.519 "name": "BaseBdev3", 00:18:50.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.519 "is_configured": false, 00:18:50.519 "data_offset": 0, 00:18:50.519 "data_size": 0 00:18:50.519 } 00:18:50.519 ] 00:18:50.519 }' 00:18:50.519 16:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:50.519 16:35:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.094 16:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:51.094 [2024-07-24 16:35:47.913610] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:51.094 [2024-07-24 16:35:47.913648] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state 
configuring 00:18:51.094 16:35:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:51.353 [2024-07-24 16:35:48.142303] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:51.353 [2024-07-24 16:35:48.142349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:51.353 [2024-07-24 16:35:48.142363] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:51.353 [2024-07-24 16:35:48.142383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:51.353 [2024-07-24 16:35:48.142394] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:51.354 [2024-07-24 16:35:48.142414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:51.354 16:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:51.612 [2024-07-24 16:35:48.422448] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:51.612 BaseBdev1 00:18:51.612 16:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:51.612 16:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:51.612 16:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:51.612 16:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:51.612 16:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:18:51.613 16:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:51.613 16:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:51.871 16:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:52.129 [ 00:18:52.129 { 00:18:52.129 "name": "BaseBdev1", 00:18:52.129 "aliases": [ 00:18:52.129 "2398bede-7145-452f-ba4f-6b2381b4d0ba" 00:18:52.129 ], 00:18:52.129 "product_name": "Malloc disk", 00:18:52.129 "block_size": 512, 00:18:52.129 "num_blocks": 65536, 00:18:52.129 "uuid": "2398bede-7145-452f-ba4f-6b2381b4d0ba", 00:18:52.129 "assigned_rate_limits": { 00:18:52.129 "rw_ios_per_sec": 0, 00:18:52.129 "rw_mbytes_per_sec": 0, 00:18:52.129 "r_mbytes_per_sec": 0, 00:18:52.129 "w_mbytes_per_sec": 0 00:18:52.129 }, 00:18:52.129 "claimed": true, 00:18:52.129 "claim_type": "exclusive_write", 00:18:52.129 "zoned": false, 00:18:52.129 "supported_io_types": { 00:18:52.129 "read": true, 00:18:52.129 "write": true, 00:18:52.129 "unmap": true, 00:18:52.129 "flush": true, 00:18:52.129 "reset": true, 00:18:52.129 "nvme_admin": false, 00:18:52.129 "nvme_io": false, 00:18:52.129 "nvme_io_md": false, 00:18:52.129 "write_zeroes": true, 00:18:52.129 "zcopy": true, 00:18:52.129 "get_zone_info": false, 00:18:52.129 "zone_management": false, 00:18:52.129 "zone_append": false, 00:18:52.129 "compare": false, 00:18:52.129 "compare_and_write": false, 00:18:52.129 "abort": true, 00:18:52.129 "seek_hole": false, 00:18:52.129 "seek_data": false, 00:18:52.129 "copy": true, 00:18:52.129 "nvme_iov_md": false 00:18:52.129 }, 00:18:52.129 "memory_domains": [ 00:18:52.129 { 00:18:52.129 "dma_device_id": "system", 00:18:52.129 "dma_device_type": 1 
00:18:52.129 }, 00:18:52.129 { 00:18:52.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.129 "dma_device_type": 2 00:18:52.129 } 00:18:52.129 ], 00:18:52.129 "driver_specific": {} 00:18:52.129 } 00:18:52.129 ] 00:18:52.129 16:35:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:52.129 16:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:52.129 16:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:52.129 16:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:52.129 16:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:52.129 16:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:52.129 16:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:52.129 16:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:52.129 16:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:52.129 16:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:52.129 16:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:52.129 16:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.129 16:35:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.388 16:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:52.388 "name": "Existed_Raid", 
00:18:52.388 "uuid": "3ae0a2b4-a91d-4185-8d05-c17a38fd2ea1", 00:18:52.388 "strip_size_kb": 64, 00:18:52.388 "state": "configuring", 00:18:52.388 "raid_level": "concat", 00:18:52.388 "superblock": true, 00:18:52.388 "num_base_bdevs": 3, 00:18:52.388 "num_base_bdevs_discovered": 1, 00:18:52.388 "num_base_bdevs_operational": 3, 00:18:52.388 "base_bdevs_list": [ 00:18:52.388 { 00:18:52.388 "name": "BaseBdev1", 00:18:52.388 "uuid": "2398bede-7145-452f-ba4f-6b2381b4d0ba", 00:18:52.388 "is_configured": true, 00:18:52.388 "data_offset": 2048, 00:18:52.388 "data_size": 63488 00:18:52.388 }, 00:18:52.388 { 00:18:52.388 "name": "BaseBdev2", 00:18:52.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.388 "is_configured": false, 00:18:52.388 "data_offset": 0, 00:18:52.388 "data_size": 0 00:18:52.388 }, 00:18:52.388 { 00:18:52.388 "name": "BaseBdev3", 00:18:52.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.388 "is_configured": false, 00:18:52.388 "data_offset": 0, 00:18:52.388 "data_size": 0 00:18:52.388 } 00:18:52.388 ] 00:18:52.388 }' 00:18:52.388 16:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:52.388 16:35:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.955 16:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:53.214 [2024-07-24 16:35:49.890450] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:53.214 [2024-07-24 16:35:49.890502] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:18:53.214 16:35:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:18:53.472 [2024-07-24 16:35:50.119205] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:53.472 [2024-07-24 16:35:50.121543] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:53.472 [2024-07-24 16:35:50.121587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:53.472 [2024-07-24 16:35:50.121601] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:53.472 [2024-07-24 16:35:50.121618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:53.472 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:53.472 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:53.472 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:53.472 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:53.472 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:53.472 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:53.472 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:53.472 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:53.472 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.472 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:53.472 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.472 
16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.472 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.472 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.730 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:53.730 "name": "Existed_Raid", 00:18:53.730 "uuid": "4f43e1fb-6e98-4c27-866c-767ed7741c21", 00:18:53.730 "strip_size_kb": 64, 00:18:53.730 "state": "configuring", 00:18:53.731 "raid_level": "concat", 00:18:53.731 "superblock": true, 00:18:53.731 "num_base_bdevs": 3, 00:18:53.731 "num_base_bdevs_discovered": 1, 00:18:53.731 "num_base_bdevs_operational": 3, 00:18:53.731 "base_bdevs_list": [ 00:18:53.731 { 00:18:53.731 "name": "BaseBdev1", 00:18:53.731 "uuid": "2398bede-7145-452f-ba4f-6b2381b4d0ba", 00:18:53.731 "is_configured": true, 00:18:53.731 "data_offset": 2048, 00:18:53.731 "data_size": 63488 00:18:53.731 }, 00:18:53.731 { 00:18:53.731 "name": "BaseBdev2", 00:18:53.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.731 "is_configured": false, 00:18:53.731 "data_offset": 0, 00:18:53.731 "data_size": 0 00:18:53.731 }, 00:18:53.731 { 00:18:53.731 "name": "BaseBdev3", 00:18:53.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.731 "is_configured": false, 00:18:53.731 "data_offset": 0, 00:18:53.731 "data_size": 0 00:18:53.731 } 00:18:53.731 ] 00:18:53.731 }' 00:18:53.731 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:53.731 16:35:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.297 16:35:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:54.297 [2024-07-24 16:35:51.128916] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:54.297 BaseBdev2 00:18:54.297 16:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:54.297 16:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:54.297 16:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:54.297 16:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:54.297 16:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:54.298 16:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:54.298 16:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:54.556 16:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:55.183 [ 00:18:55.183 { 00:18:55.183 "name": "BaseBdev2", 00:18:55.183 "aliases": [ 00:18:55.183 "899420fe-fe92-44f0-8a05-5ba0b200136f" 00:18:55.183 ], 00:18:55.183 "product_name": "Malloc disk", 00:18:55.183 "block_size": 512, 00:18:55.183 "num_blocks": 65536, 00:18:55.183 "uuid": "899420fe-fe92-44f0-8a05-5ba0b200136f", 00:18:55.183 "assigned_rate_limits": { 00:18:55.183 "rw_ios_per_sec": 0, 00:18:55.183 "rw_mbytes_per_sec": 0, 00:18:55.183 "r_mbytes_per_sec": 0, 00:18:55.183 "w_mbytes_per_sec": 0 00:18:55.183 }, 00:18:55.183 "claimed": true, 00:18:55.183 "claim_type": "exclusive_write", 00:18:55.183 "zoned": false, 00:18:55.183 "supported_io_types": { 
00:18:55.183 "read": true, 00:18:55.183 "write": true, 00:18:55.183 "unmap": true, 00:18:55.183 "flush": true, 00:18:55.183 "reset": true, 00:18:55.183 "nvme_admin": false, 00:18:55.183 "nvme_io": false, 00:18:55.183 "nvme_io_md": false, 00:18:55.183 "write_zeroes": true, 00:18:55.183 "zcopy": true, 00:18:55.183 "get_zone_info": false, 00:18:55.183 "zone_management": false, 00:18:55.183 "zone_append": false, 00:18:55.183 "compare": false, 00:18:55.183 "compare_and_write": false, 00:18:55.183 "abort": true, 00:18:55.183 "seek_hole": false, 00:18:55.183 "seek_data": false, 00:18:55.183 "copy": true, 00:18:55.183 "nvme_iov_md": false 00:18:55.183 }, 00:18:55.183 "memory_domains": [ 00:18:55.183 { 00:18:55.183 "dma_device_id": "system", 00:18:55.183 "dma_device_type": 1 00:18:55.183 }, 00:18:55.183 { 00:18:55.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.183 "dma_device_type": 2 00:18:55.183 } 00:18:55.183 ], 00:18:55.183 "driver_specific": {} 00:18:55.183 } 00:18:55.183 ] 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.183 16:35:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.442 16:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:55.442 "name": "Existed_Raid", 00:18:55.442 "uuid": "4f43e1fb-6e98-4c27-866c-767ed7741c21", 00:18:55.442 "strip_size_kb": 64, 00:18:55.442 "state": "configuring", 00:18:55.442 "raid_level": "concat", 00:18:55.442 "superblock": true, 00:18:55.442 "num_base_bdevs": 3, 00:18:55.442 "num_base_bdevs_discovered": 2, 00:18:55.442 "num_base_bdevs_operational": 3, 00:18:55.442 "base_bdevs_list": [ 00:18:55.442 { 00:18:55.442 "name": "BaseBdev1", 00:18:55.442 "uuid": "2398bede-7145-452f-ba4f-6b2381b4d0ba", 00:18:55.442 "is_configured": true, 00:18:55.442 "data_offset": 2048, 00:18:55.442 "data_size": 63488 00:18:55.442 }, 00:18:55.442 { 00:18:55.442 "name": "BaseBdev2", 00:18:55.442 "uuid": "899420fe-fe92-44f0-8a05-5ba0b200136f", 00:18:55.442 "is_configured": true, 00:18:55.442 "data_offset": 2048, 00:18:55.442 "data_size": 63488 00:18:55.442 }, 00:18:55.442 { 00:18:55.442 "name": "BaseBdev3", 00:18:55.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.442 "is_configured": false, 00:18:55.442 "data_offset": 0, 00:18:55.442 
"data_size": 0 00:18:55.442 } 00:18:55.442 ] 00:18:55.442 }' 00:18:55.442 16:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:55.442 16:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.009 16:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:56.267 [2024-07-24 16:35:52.903613] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:56.267 [2024-07-24 16:35:52.903870] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:18:56.267 [2024-07-24 16:35:52.903897] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:56.267 [2024-07-24 16:35:52.904223] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:18:56.267 [2024-07-24 16:35:52.904464] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:18:56.267 [2024-07-24 16:35:52.904479] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:18:56.267 [2024-07-24 16:35:52.904671] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.267 BaseBdev3 00:18:56.267 16:35:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:56.267 16:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:56.267 16:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:56.267 16:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:56.267 16:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:56.268 
16:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:56.268 16:35:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:56.268 16:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:56.526 [ 00:18:56.526 { 00:18:56.526 "name": "BaseBdev3", 00:18:56.526 "aliases": [ 00:18:56.526 "fb4a12bb-2a0f-46c1-a520-ef48fa2a5fb8" 00:18:56.526 ], 00:18:56.526 "product_name": "Malloc disk", 00:18:56.526 "block_size": 512, 00:18:56.526 "num_blocks": 65536, 00:18:56.526 "uuid": "fb4a12bb-2a0f-46c1-a520-ef48fa2a5fb8", 00:18:56.526 "assigned_rate_limits": { 00:18:56.526 "rw_ios_per_sec": 0, 00:18:56.526 "rw_mbytes_per_sec": 0, 00:18:56.526 "r_mbytes_per_sec": 0, 00:18:56.526 "w_mbytes_per_sec": 0 00:18:56.526 }, 00:18:56.526 "claimed": true, 00:18:56.526 "claim_type": "exclusive_write", 00:18:56.526 "zoned": false, 00:18:56.526 "supported_io_types": { 00:18:56.526 "read": true, 00:18:56.526 "write": true, 00:18:56.526 "unmap": true, 00:18:56.526 "flush": true, 00:18:56.526 "reset": true, 00:18:56.526 "nvme_admin": false, 00:18:56.526 "nvme_io": false, 00:18:56.526 "nvme_io_md": false, 00:18:56.526 "write_zeroes": true, 00:18:56.526 "zcopy": true, 00:18:56.526 "get_zone_info": false, 00:18:56.526 "zone_management": false, 00:18:56.526 "zone_append": false, 00:18:56.526 "compare": false, 00:18:56.526 "compare_and_write": false, 00:18:56.526 "abort": true, 00:18:56.526 "seek_hole": false, 00:18:56.526 "seek_data": false, 00:18:56.526 "copy": true, 00:18:56.526 "nvme_iov_md": false 00:18:56.526 }, 00:18:56.526 "memory_domains": [ 00:18:56.526 { 00:18:56.527 "dma_device_id": "system", 00:18:56.527 "dma_device_type": 1 00:18:56.527 }, 
00:18:56.527 { 00:18:56.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.527 "dma_device_type": 2 00:18:56.527 } 00:18:56.527 ], 00:18:56.527 "driver_specific": {} 00:18:56.527 } 00:18:56.527 ] 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.527 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:18:56.785 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:56.785 "name": "Existed_Raid", 00:18:56.785 "uuid": "4f43e1fb-6e98-4c27-866c-767ed7741c21", 00:18:56.785 "strip_size_kb": 64, 00:18:56.785 "state": "online", 00:18:56.785 "raid_level": "concat", 00:18:56.785 "superblock": true, 00:18:56.785 "num_base_bdevs": 3, 00:18:56.785 "num_base_bdevs_discovered": 3, 00:18:56.785 "num_base_bdevs_operational": 3, 00:18:56.785 "base_bdevs_list": [ 00:18:56.785 { 00:18:56.785 "name": "BaseBdev1", 00:18:56.785 "uuid": "2398bede-7145-452f-ba4f-6b2381b4d0ba", 00:18:56.785 "is_configured": true, 00:18:56.785 "data_offset": 2048, 00:18:56.785 "data_size": 63488 00:18:56.785 }, 00:18:56.785 { 00:18:56.785 "name": "BaseBdev2", 00:18:56.785 "uuid": "899420fe-fe92-44f0-8a05-5ba0b200136f", 00:18:56.785 "is_configured": true, 00:18:56.785 "data_offset": 2048, 00:18:56.785 "data_size": 63488 00:18:56.785 }, 00:18:56.785 { 00:18:56.785 "name": "BaseBdev3", 00:18:56.785 "uuid": "fb4a12bb-2a0f-46c1-a520-ef48fa2a5fb8", 00:18:56.785 "is_configured": true, 00:18:56.785 "data_offset": 2048, 00:18:56.785 "data_size": 63488 00:18:56.785 } 00:18:56.785 ] 00:18:56.785 }' 00:18:56.785 16:35:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:56.785 16:35:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.351 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:57.351 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:57.351 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:57.351 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:57.352 16:35:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:57.352 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:57.352 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:57.352 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:57.352 [2024-07-24 16:35:54.211560] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:57.609 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:57.609 "name": "Existed_Raid", 00:18:57.609 "aliases": [ 00:18:57.609 "4f43e1fb-6e98-4c27-866c-767ed7741c21" 00:18:57.610 ], 00:18:57.610 "product_name": "Raid Volume", 00:18:57.610 "block_size": 512, 00:18:57.610 "num_blocks": 190464, 00:18:57.610 "uuid": "4f43e1fb-6e98-4c27-866c-767ed7741c21", 00:18:57.610 "assigned_rate_limits": { 00:18:57.610 "rw_ios_per_sec": 0, 00:18:57.610 "rw_mbytes_per_sec": 0, 00:18:57.610 "r_mbytes_per_sec": 0, 00:18:57.610 "w_mbytes_per_sec": 0 00:18:57.610 }, 00:18:57.610 "claimed": false, 00:18:57.610 "zoned": false, 00:18:57.610 "supported_io_types": { 00:18:57.610 "read": true, 00:18:57.610 "write": true, 00:18:57.610 "unmap": true, 00:18:57.610 "flush": true, 00:18:57.610 "reset": true, 00:18:57.610 "nvme_admin": false, 00:18:57.610 "nvme_io": false, 00:18:57.610 "nvme_io_md": false, 00:18:57.610 "write_zeroes": true, 00:18:57.610 "zcopy": false, 00:18:57.610 "get_zone_info": false, 00:18:57.610 "zone_management": false, 00:18:57.610 "zone_append": false, 00:18:57.610 "compare": false, 00:18:57.610 "compare_and_write": false, 00:18:57.610 "abort": false, 00:18:57.610 "seek_hole": false, 00:18:57.610 "seek_data": false, 00:18:57.610 "copy": false, 00:18:57.610 "nvme_iov_md": false 00:18:57.610 }, 00:18:57.610 
"memory_domains": [ 00:18:57.610 { 00:18:57.610 "dma_device_id": "system", 00:18:57.610 "dma_device_type": 1 00:18:57.610 }, 00:18:57.610 { 00:18:57.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.610 "dma_device_type": 2 00:18:57.610 }, 00:18:57.610 { 00:18:57.610 "dma_device_id": "system", 00:18:57.610 "dma_device_type": 1 00:18:57.610 }, 00:18:57.610 { 00:18:57.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.610 "dma_device_type": 2 00:18:57.610 }, 00:18:57.610 { 00:18:57.610 "dma_device_id": "system", 00:18:57.610 "dma_device_type": 1 00:18:57.610 }, 00:18:57.610 { 00:18:57.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.610 "dma_device_type": 2 00:18:57.610 } 00:18:57.610 ], 00:18:57.610 "driver_specific": { 00:18:57.610 "raid": { 00:18:57.610 "uuid": "4f43e1fb-6e98-4c27-866c-767ed7741c21", 00:18:57.610 "strip_size_kb": 64, 00:18:57.610 "state": "online", 00:18:57.610 "raid_level": "concat", 00:18:57.610 "superblock": true, 00:18:57.610 "num_base_bdevs": 3, 00:18:57.610 "num_base_bdevs_discovered": 3, 00:18:57.610 "num_base_bdevs_operational": 3, 00:18:57.610 "base_bdevs_list": [ 00:18:57.610 { 00:18:57.610 "name": "BaseBdev1", 00:18:57.610 "uuid": "2398bede-7145-452f-ba4f-6b2381b4d0ba", 00:18:57.610 "is_configured": true, 00:18:57.610 "data_offset": 2048, 00:18:57.610 "data_size": 63488 00:18:57.610 }, 00:18:57.610 { 00:18:57.610 "name": "BaseBdev2", 00:18:57.610 "uuid": "899420fe-fe92-44f0-8a05-5ba0b200136f", 00:18:57.610 "is_configured": true, 00:18:57.610 "data_offset": 2048, 00:18:57.610 "data_size": 63488 00:18:57.610 }, 00:18:57.610 { 00:18:57.610 "name": "BaseBdev3", 00:18:57.610 "uuid": "fb4a12bb-2a0f-46c1-a520-ef48fa2a5fb8", 00:18:57.610 "is_configured": true, 00:18:57.610 "data_offset": 2048, 00:18:57.610 "data_size": 63488 00:18:57.610 } 00:18:57.610 ] 00:18:57.610 } 00:18:57.610 } 00:18:57.610 }' 00:18:57.610 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:57.610 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:57.610 BaseBdev2 00:18:57.610 BaseBdev3' 00:18:57.610 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:57.610 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:57.610 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:57.868 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:57.868 "name": "BaseBdev1", 00:18:57.868 "aliases": [ 00:18:57.868 "2398bede-7145-452f-ba4f-6b2381b4d0ba" 00:18:57.868 ], 00:18:57.868 "product_name": "Malloc disk", 00:18:57.868 "block_size": 512, 00:18:57.868 "num_blocks": 65536, 00:18:57.868 "uuid": "2398bede-7145-452f-ba4f-6b2381b4d0ba", 00:18:57.868 "assigned_rate_limits": { 00:18:57.868 "rw_ios_per_sec": 0, 00:18:57.868 "rw_mbytes_per_sec": 0, 00:18:57.868 "r_mbytes_per_sec": 0, 00:18:57.868 "w_mbytes_per_sec": 0 00:18:57.868 }, 00:18:57.868 "claimed": true, 00:18:57.868 "claim_type": "exclusive_write", 00:18:57.868 "zoned": false, 00:18:57.868 "supported_io_types": { 00:18:57.868 "read": true, 00:18:57.868 "write": true, 00:18:57.868 "unmap": true, 00:18:57.868 "flush": true, 00:18:57.868 "reset": true, 00:18:57.868 "nvme_admin": false, 00:18:57.868 "nvme_io": false, 00:18:57.868 "nvme_io_md": false, 00:18:57.868 "write_zeroes": true, 00:18:57.868 "zcopy": true, 00:18:57.868 "get_zone_info": false, 00:18:57.868 "zone_management": false, 00:18:57.868 "zone_append": false, 00:18:57.868 "compare": false, 00:18:57.868 "compare_and_write": false, 00:18:57.868 "abort": true, 00:18:57.868 "seek_hole": false, 00:18:57.868 "seek_data": false, 
00:18:57.868 "copy": true, 00:18:57.868 "nvme_iov_md": false 00:18:57.868 }, 00:18:57.868 "memory_domains": [ 00:18:57.868 { 00:18:57.868 "dma_device_id": "system", 00:18:57.868 "dma_device_type": 1 00:18:57.868 }, 00:18:57.868 { 00:18:57.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.868 "dma_device_type": 2 00:18:57.868 } 00:18:57.868 ], 00:18:57.868 "driver_specific": {} 00:18:57.868 }' 00:18:57.868 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:57.868 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:57.868 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:57.868 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:57.868 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:57.868 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:57.868 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:57.868 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.126 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:58.126 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.126 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.126 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:58.126 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:58.126 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 
00:18:58.126 16:35:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:58.383 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:58.383 "name": "BaseBdev2", 00:18:58.383 "aliases": [ 00:18:58.383 "899420fe-fe92-44f0-8a05-5ba0b200136f" 00:18:58.383 ], 00:18:58.383 "product_name": "Malloc disk", 00:18:58.383 "block_size": 512, 00:18:58.383 "num_blocks": 65536, 00:18:58.383 "uuid": "899420fe-fe92-44f0-8a05-5ba0b200136f", 00:18:58.383 "assigned_rate_limits": { 00:18:58.383 "rw_ios_per_sec": 0, 00:18:58.383 "rw_mbytes_per_sec": 0, 00:18:58.383 "r_mbytes_per_sec": 0, 00:18:58.383 "w_mbytes_per_sec": 0 00:18:58.383 }, 00:18:58.383 "claimed": true, 00:18:58.383 "claim_type": "exclusive_write", 00:18:58.383 "zoned": false, 00:18:58.383 "supported_io_types": { 00:18:58.383 "read": true, 00:18:58.383 "write": true, 00:18:58.383 "unmap": true, 00:18:58.383 "flush": true, 00:18:58.383 "reset": true, 00:18:58.383 "nvme_admin": false, 00:18:58.383 "nvme_io": false, 00:18:58.383 "nvme_io_md": false, 00:18:58.383 "write_zeroes": true, 00:18:58.383 "zcopy": true, 00:18:58.383 "get_zone_info": false, 00:18:58.383 "zone_management": false, 00:18:58.383 "zone_append": false, 00:18:58.383 "compare": false, 00:18:58.383 "compare_and_write": false, 00:18:58.383 "abort": true, 00:18:58.383 "seek_hole": false, 00:18:58.383 "seek_data": false, 00:18:58.383 "copy": true, 00:18:58.383 "nvme_iov_md": false 00:18:58.383 }, 00:18:58.383 "memory_domains": [ 00:18:58.383 { 00:18:58.383 "dma_device_id": "system", 00:18:58.383 "dma_device_type": 1 00:18:58.383 }, 00:18:58.383 { 00:18:58.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.383 "dma_device_type": 2 00:18:58.383 } 00:18:58.383 ], 00:18:58.383 "driver_specific": {} 00:18:58.383 }' 00:18:58.383 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.383 16:35:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.383 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:58.383 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.383 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.383 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:58.383 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.641 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.641 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:58.641 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.641 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.641 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:58.641 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:58.641 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:58.641 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:58.899 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:58.899 "name": "BaseBdev3", 00:18:58.899 "aliases": [ 00:18:58.899 "fb4a12bb-2a0f-46c1-a520-ef48fa2a5fb8" 00:18:58.899 ], 00:18:58.899 "product_name": "Malloc disk", 00:18:58.899 "block_size": 512, 00:18:58.899 "num_blocks": 65536, 00:18:58.899 "uuid": "fb4a12bb-2a0f-46c1-a520-ef48fa2a5fb8", 00:18:58.899 "assigned_rate_limits": { 00:18:58.899 
"rw_ios_per_sec": 0, 00:18:58.899 "rw_mbytes_per_sec": 0, 00:18:58.899 "r_mbytes_per_sec": 0, 00:18:58.899 "w_mbytes_per_sec": 0 00:18:58.899 }, 00:18:58.899 "claimed": true, 00:18:58.899 "claim_type": "exclusive_write", 00:18:58.899 "zoned": false, 00:18:58.899 "supported_io_types": { 00:18:58.899 "read": true, 00:18:58.899 "write": true, 00:18:58.899 "unmap": true, 00:18:58.899 "flush": true, 00:18:58.899 "reset": true, 00:18:58.899 "nvme_admin": false, 00:18:58.899 "nvme_io": false, 00:18:58.899 "nvme_io_md": false, 00:18:58.899 "write_zeroes": true, 00:18:58.899 "zcopy": true, 00:18:58.899 "get_zone_info": false, 00:18:58.899 "zone_management": false, 00:18:58.899 "zone_append": false, 00:18:58.899 "compare": false, 00:18:58.899 "compare_and_write": false, 00:18:58.899 "abort": true, 00:18:58.899 "seek_hole": false, 00:18:58.899 "seek_data": false, 00:18:58.899 "copy": true, 00:18:58.899 "nvme_iov_md": false 00:18:58.899 }, 00:18:58.899 "memory_domains": [ 00:18:58.899 { 00:18:58.899 "dma_device_id": "system", 00:18:58.899 "dma_device_type": 1 00:18:58.899 }, 00:18:58.899 { 00:18:58.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.899 "dma_device_type": 2 00:18:58.899 } 00:18:58.899 ], 00:18:58.899 "driver_specific": {} 00:18:58.899 }' 00:18:58.899 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.899 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:58.899 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:58.899 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.899 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:59.157 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:59.157 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:18:59.157 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:59.157 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:59.157 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.157 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:59.157 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:59.157 16:35:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:59.416 [2024-07-24 16:35:56.160583] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:59.416 [2024-07-24 16:35:56.160613] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:59.416 [2024-07-24 16:35:56.160673] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.416 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.674 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.674 "name": "Existed_Raid", 00:18:59.674 "uuid": "4f43e1fb-6e98-4c27-866c-767ed7741c21", 00:18:59.674 "strip_size_kb": 64, 00:18:59.674 "state": "offline", 00:18:59.674 "raid_level": "concat", 00:18:59.674 "superblock": true, 00:18:59.674 "num_base_bdevs": 3, 00:18:59.674 "num_base_bdevs_discovered": 2, 00:18:59.674 "num_base_bdevs_operational": 2, 00:18:59.674 "base_bdevs_list": [ 00:18:59.674 { 00:18:59.674 "name": null, 00:18:59.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.674 "is_configured": false, 00:18:59.674 "data_offset": 2048, 00:18:59.674 "data_size": 63488 00:18:59.674 }, 00:18:59.674 { 00:18:59.674 "name": "BaseBdev2", 00:18:59.674 "uuid": 
"899420fe-fe92-44f0-8a05-5ba0b200136f", 00:18:59.674 "is_configured": true, 00:18:59.674 "data_offset": 2048, 00:18:59.674 "data_size": 63488 00:18:59.674 }, 00:18:59.674 { 00:18:59.674 "name": "BaseBdev3", 00:18:59.674 "uuid": "fb4a12bb-2a0f-46c1-a520-ef48fa2a5fb8", 00:18:59.674 "is_configured": true, 00:18:59.674 "data_offset": 2048, 00:18:59.674 "data_size": 63488 00:18:59.674 } 00:18:59.674 ] 00:18:59.674 }' 00:18:59.674 16:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.674 16:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:00.239 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:00.239 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:00.239 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.239 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:00.495 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:00.495 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:00.495 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:00.752 [2024-07-24 16:35:57.446334] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:00.752 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:00.752 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:00.752 16:35:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.752 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:01.010 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:01.010 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:01.010 16:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:01.267 [2024-07-24 16:35:58.031471] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:01.267 [2024-07-24 16:35:58.031525] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:19:01.525 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:01.525 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:01.525 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.525 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:01.784 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:01.784 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:01.784 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:01.784 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:01.784 16:35:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:01.784 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:02.042 BaseBdev2 00:19:02.042 16:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:02.042 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:02.042 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:02.042 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:02.042 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:02.042 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:02.042 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.300 16:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:02.300 [ 00:19:02.300 { 00:19:02.300 "name": "BaseBdev2", 00:19:02.300 "aliases": [ 00:19:02.300 "dd52103f-9827-4354-9c68-1cbe8d0d4ea7" 00:19:02.300 ], 00:19:02.300 "product_name": "Malloc disk", 00:19:02.300 "block_size": 512, 00:19:02.300 "num_blocks": 65536, 00:19:02.301 "uuid": "dd52103f-9827-4354-9c68-1cbe8d0d4ea7", 00:19:02.301 "assigned_rate_limits": { 00:19:02.301 "rw_ios_per_sec": 0, 00:19:02.301 "rw_mbytes_per_sec": 0, 00:19:02.301 "r_mbytes_per_sec": 0, 00:19:02.301 "w_mbytes_per_sec": 0 00:19:02.301 }, 00:19:02.301 "claimed": false, 00:19:02.301 "zoned": false, 
00:19:02.301 "supported_io_types": { 00:19:02.301 "read": true, 00:19:02.301 "write": true, 00:19:02.301 "unmap": true, 00:19:02.301 "flush": true, 00:19:02.301 "reset": true, 00:19:02.301 "nvme_admin": false, 00:19:02.301 "nvme_io": false, 00:19:02.301 "nvme_io_md": false, 00:19:02.301 "write_zeroes": true, 00:19:02.301 "zcopy": true, 00:19:02.301 "get_zone_info": false, 00:19:02.301 "zone_management": false, 00:19:02.301 "zone_append": false, 00:19:02.301 "compare": false, 00:19:02.301 "compare_and_write": false, 00:19:02.301 "abort": true, 00:19:02.301 "seek_hole": false, 00:19:02.301 "seek_data": false, 00:19:02.301 "copy": true, 00:19:02.301 "nvme_iov_md": false 00:19:02.301 }, 00:19:02.301 "memory_domains": [ 00:19:02.301 { 00:19:02.301 "dma_device_id": "system", 00:19:02.301 "dma_device_type": 1 00:19:02.301 }, 00:19:02.301 { 00:19:02.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.301 "dma_device_type": 2 00:19:02.301 } 00:19:02.301 ], 00:19:02.301 "driver_specific": {} 00:19:02.301 } 00:19:02.301 ] 00:19:02.301 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:02.301 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:02.301 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:02.301 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:02.559 BaseBdev3 00:19:02.559 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:02.559 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:02.817 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:02.817 16:35:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:02.817 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:02.817 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:02.817 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.817 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:03.075 [ 00:19:03.075 { 00:19:03.075 "name": "BaseBdev3", 00:19:03.075 "aliases": [ 00:19:03.075 "07fb47c4-72e1-4a5a-8e4f-689107db1fc4" 00:19:03.075 ], 00:19:03.075 "product_name": "Malloc disk", 00:19:03.075 "block_size": 512, 00:19:03.075 "num_blocks": 65536, 00:19:03.075 "uuid": "07fb47c4-72e1-4a5a-8e4f-689107db1fc4", 00:19:03.075 "assigned_rate_limits": { 00:19:03.075 "rw_ios_per_sec": 0, 00:19:03.075 "rw_mbytes_per_sec": 0, 00:19:03.075 "r_mbytes_per_sec": 0, 00:19:03.075 "w_mbytes_per_sec": 0 00:19:03.075 }, 00:19:03.075 "claimed": false, 00:19:03.075 "zoned": false, 00:19:03.075 "supported_io_types": { 00:19:03.075 "read": true, 00:19:03.075 "write": true, 00:19:03.075 "unmap": true, 00:19:03.075 "flush": true, 00:19:03.075 "reset": true, 00:19:03.075 "nvme_admin": false, 00:19:03.075 "nvme_io": false, 00:19:03.075 "nvme_io_md": false, 00:19:03.075 "write_zeroes": true, 00:19:03.075 "zcopy": true, 00:19:03.075 "get_zone_info": false, 00:19:03.075 "zone_management": false, 00:19:03.075 "zone_append": false, 00:19:03.075 "compare": false, 00:19:03.075 "compare_and_write": false, 00:19:03.075 "abort": true, 00:19:03.075 "seek_hole": false, 00:19:03.075 "seek_data": false, 00:19:03.075 "copy": true, 00:19:03.075 "nvme_iov_md": 
false 00:19:03.075 }, 00:19:03.075 "memory_domains": [ 00:19:03.075 { 00:19:03.075 "dma_device_id": "system", 00:19:03.075 "dma_device_type": 1 00:19:03.075 }, 00:19:03.075 { 00:19:03.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.075 "dma_device_type": 2 00:19:03.075 } 00:19:03.075 ], 00:19:03.075 "driver_specific": {} 00:19:03.075 } 00:19:03.075 ] 00:19:03.075 16:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:03.075 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:03.075 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:03.075 16:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:03.333 [2024-07-24 16:36:00.094768] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:03.333 [2024-07-24 16:36:00.094820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:03.333 [2024-07-24 16:36:00.094858] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:03.333 [2024-07-24 16:36:00.097172] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:03.333 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:03.333 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:03.333 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:03.333 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:03.333 16:36:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:03.333 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:03.333 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.333 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.333 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.333 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.333 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.333 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.590 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.590 "name": "Existed_Raid", 00:19:03.590 "uuid": "2fdbd9b1-eeff-4ff9-b6b4-b37b1fda8499", 00:19:03.590 "strip_size_kb": 64, 00:19:03.590 "state": "configuring", 00:19:03.590 "raid_level": "concat", 00:19:03.590 "superblock": true, 00:19:03.590 "num_base_bdevs": 3, 00:19:03.590 "num_base_bdevs_discovered": 2, 00:19:03.590 "num_base_bdevs_operational": 3, 00:19:03.590 "base_bdevs_list": [ 00:19:03.590 { 00:19:03.590 "name": "BaseBdev1", 00:19:03.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.590 "is_configured": false, 00:19:03.590 "data_offset": 0, 00:19:03.590 "data_size": 0 00:19:03.590 }, 00:19:03.590 { 00:19:03.590 "name": "BaseBdev2", 00:19:03.590 "uuid": "dd52103f-9827-4354-9c68-1cbe8d0d4ea7", 00:19:03.590 "is_configured": true, 00:19:03.590 "data_offset": 2048, 00:19:03.590 "data_size": 63488 00:19:03.590 }, 00:19:03.590 { 00:19:03.590 "name": 
"BaseBdev3", 00:19:03.590 "uuid": "07fb47c4-72e1-4a5a-8e4f-689107db1fc4", 00:19:03.591 "is_configured": true, 00:19:03.591 "data_offset": 2048, 00:19:03.591 "data_size": 63488 00:19:03.591 } 00:19:03.591 ] 00:19:03.591 }' 00:19:03.591 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.591 16:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.153 16:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:04.410 [2024-07-24 16:36:01.125520] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:04.410 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:04.410 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:04.410 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:04.410 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:04.410 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:04.410 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:04.410 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:04.410 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:04.410 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:04.410 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:04.410 16:36:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.410 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.665 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.665 "name": "Existed_Raid", 00:19:04.665 "uuid": "2fdbd9b1-eeff-4ff9-b6b4-b37b1fda8499", 00:19:04.665 "strip_size_kb": 64, 00:19:04.665 "state": "configuring", 00:19:04.665 "raid_level": "concat", 00:19:04.665 "superblock": true, 00:19:04.665 "num_base_bdevs": 3, 00:19:04.665 "num_base_bdevs_discovered": 1, 00:19:04.665 "num_base_bdevs_operational": 3, 00:19:04.665 "base_bdevs_list": [ 00:19:04.665 { 00:19:04.665 "name": "BaseBdev1", 00:19:04.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.665 "is_configured": false, 00:19:04.665 "data_offset": 0, 00:19:04.665 "data_size": 0 00:19:04.666 }, 00:19:04.666 { 00:19:04.666 "name": null, 00:19:04.666 "uuid": "dd52103f-9827-4354-9c68-1cbe8d0d4ea7", 00:19:04.666 "is_configured": false, 00:19:04.666 "data_offset": 2048, 00:19:04.666 "data_size": 63488 00:19:04.666 }, 00:19:04.666 { 00:19:04.666 "name": "BaseBdev3", 00:19:04.666 "uuid": "07fb47c4-72e1-4a5a-8e4f-689107db1fc4", 00:19:04.666 "is_configured": true, 00:19:04.666 "data_offset": 2048, 00:19:04.666 "data_size": 63488 00:19:04.666 } 00:19:04.666 ] 00:19:04.666 }' 00:19:04.666 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.666 16:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.229 16:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.229 16:36:01 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:05.487 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:05.487 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:05.745 [2024-07-24 16:36:02.424541] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.745 BaseBdev1 00:19:05.745 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:05.745 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:05.745 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:05.745 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:05.745 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:05.745 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:05.745 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:06.004 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:06.262 [ 00:19:06.262 { 00:19:06.262 "name": "BaseBdev1", 00:19:06.262 "aliases": [ 00:19:06.262 "32f10311-e80c-4698-9d0d-fd4ecc8fd8d7" 00:19:06.262 ], 00:19:06.262 "product_name": "Malloc disk", 00:19:06.262 "block_size": 512, 00:19:06.262 "num_blocks": 65536, 00:19:06.262 "uuid": 
"32f10311-e80c-4698-9d0d-fd4ecc8fd8d7", 00:19:06.262 "assigned_rate_limits": { 00:19:06.262 "rw_ios_per_sec": 0, 00:19:06.262 "rw_mbytes_per_sec": 0, 00:19:06.262 "r_mbytes_per_sec": 0, 00:19:06.262 "w_mbytes_per_sec": 0 00:19:06.262 }, 00:19:06.262 "claimed": true, 00:19:06.262 "claim_type": "exclusive_write", 00:19:06.262 "zoned": false, 00:19:06.262 "supported_io_types": { 00:19:06.262 "read": true, 00:19:06.262 "write": true, 00:19:06.262 "unmap": true, 00:19:06.262 "flush": true, 00:19:06.262 "reset": true, 00:19:06.262 "nvme_admin": false, 00:19:06.262 "nvme_io": false, 00:19:06.262 "nvme_io_md": false, 00:19:06.262 "write_zeroes": true, 00:19:06.262 "zcopy": true, 00:19:06.262 "get_zone_info": false, 00:19:06.262 "zone_management": false, 00:19:06.262 "zone_append": false, 00:19:06.262 "compare": false, 00:19:06.262 "compare_and_write": false, 00:19:06.262 "abort": true, 00:19:06.262 "seek_hole": false, 00:19:06.262 "seek_data": false, 00:19:06.262 "copy": true, 00:19:06.262 "nvme_iov_md": false 00:19:06.262 }, 00:19:06.262 "memory_domains": [ 00:19:06.262 { 00:19:06.262 "dma_device_id": "system", 00:19:06.262 "dma_device_type": 1 00:19:06.262 }, 00:19:06.262 { 00:19:06.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.262 "dma_device_type": 2 00:19:06.262 } 00:19:06.262 ], 00:19:06.262 "driver_specific": {} 00:19:06.262 } 00:19:06.262 ] 00:19:06.262 16:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:06.262 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:06.262 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:06.262 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:06.262 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
00:19:06.262 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:06.262 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:06.262 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:06.262 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:06.262 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:06.262 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:06.262 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.262 16:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.262 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:06.262 "name": "Existed_Raid", 00:19:06.262 "uuid": "2fdbd9b1-eeff-4ff9-b6b4-b37b1fda8499", 00:19:06.262 "strip_size_kb": 64, 00:19:06.262 "state": "configuring", 00:19:06.262 "raid_level": "concat", 00:19:06.262 "superblock": true, 00:19:06.262 "num_base_bdevs": 3, 00:19:06.262 "num_base_bdevs_discovered": 2, 00:19:06.262 "num_base_bdevs_operational": 3, 00:19:06.262 "base_bdevs_list": [ 00:19:06.262 { 00:19:06.262 "name": "BaseBdev1", 00:19:06.262 "uuid": "32f10311-e80c-4698-9d0d-fd4ecc8fd8d7", 00:19:06.262 "is_configured": true, 00:19:06.262 "data_offset": 2048, 00:19:06.262 "data_size": 63488 00:19:06.263 }, 00:19:06.263 { 00:19:06.263 "name": null, 00:19:06.263 "uuid": "dd52103f-9827-4354-9c68-1cbe8d0d4ea7", 00:19:06.263 "is_configured": false, 00:19:06.263 "data_offset": 2048, 00:19:06.263 "data_size": 63488 00:19:06.263 }, 00:19:06.263 { 
00:19:06.263 "name": "BaseBdev3", 00:19:06.263 "uuid": "07fb47c4-72e1-4a5a-8e4f-689107db1fc4", 00:19:06.263 "is_configured": true, 00:19:06.263 "data_offset": 2048, 00:19:06.263 "data_size": 63488 00:19:06.263 } 00:19:06.263 ] 00:19:06.263 }' 00:19:06.263 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:06.263 16:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.828 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.828 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:07.085 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:07.085 16:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:07.343 [2024-07-24 16:36:04.061105] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:07.343 16:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:07.343 16:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:07.343 16:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:07.343 16:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:07.343 16:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:07.343 16:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:07.343 16:36:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:07.343 16:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:07.343 16:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:07.343 16:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:07.343 16:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.343 16:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.660 16:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:07.660 "name": "Existed_Raid", 00:19:07.660 "uuid": "2fdbd9b1-eeff-4ff9-b6b4-b37b1fda8499", 00:19:07.660 "strip_size_kb": 64, 00:19:07.660 "state": "configuring", 00:19:07.660 "raid_level": "concat", 00:19:07.660 "superblock": true, 00:19:07.660 "num_base_bdevs": 3, 00:19:07.660 "num_base_bdevs_discovered": 1, 00:19:07.660 "num_base_bdevs_operational": 3, 00:19:07.660 "base_bdevs_list": [ 00:19:07.660 { 00:19:07.660 "name": "BaseBdev1", 00:19:07.660 "uuid": "32f10311-e80c-4698-9d0d-fd4ecc8fd8d7", 00:19:07.660 "is_configured": true, 00:19:07.660 "data_offset": 2048, 00:19:07.660 "data_size": 63488 00:19:07.660 }, 00:19:07.660 { 00:19:07.660 "name": null, 00:19:07.660 "uuid": "dd52103f-9827-4354-9c68-1cbe8d0d4ea7", 00:19:07.660 "is_configured": false, 00:19:07.660 "data_offset": 2048, 00:19:07.660 "data_size": 63488 00:19:07.660 }, 00:19:07.660 { 00:19:07.660 "name": null, 00:19:07.660 "uuid": "07fb47c4-72e1-4a5a-8e4f-689107db1fc4", 00:19:07.660 "is_configured": false, 00:19:07.660 "data_offset": 2048, 00:19:07.660 "data_size": 63488 00:19:07.660 } 00:19:07.660 ] 00:19:07.660 }' 00:19:07.660 16:36:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:07.660 16:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.228 16:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.228 16:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:08.228 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:08.228 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:08.486 [2024-07-24 16:36:05.256381] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:08.487 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:08.487 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:08.487 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:08.487 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:08.487 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:08.487 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:08.487 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.487 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.487 16:36:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.487 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.487 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.487 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.746 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.746 "name": "Existed_Raid", 00:19:08.746 "uuid": "2fdbd9b1-eeff-4ff9-b6b4-b37b1fda8499", 00:19:08.746 "strip_size_kb": 64, 00:19:08.746 "state": "configuring", 00:19:08.746 "raid_level": "concat", 00:19:08.746 "superblock": true, 00:19:08.746 "num_base_bdevs": 3, 00:19:08.746 "num_base_bdevs_discovered": 2, 00:19:08.746 "num_base_bdevs_operational": 3, 00:19:08.746 "base_bdevs_list": [ 00:19:08.746 { 00:19:08.746 "name": "BaseBdev1", 00:19:08.746 "uuid": "32f10311-e80c-4698-9d0d-fd4ecc8fd8d7", 00:19:08.746 "is_configured": true, 00:19:08.746 "data_offset": 2048, 00:19:08.746 "data_size": 63488 00:19:08.746 }, 00:19:08.746 { 00:19:08.746 "name": null, 00:19:08.746 "uuid": "dd52103f-9827-4354-9c68-1cbe8d0d4ea7", 00:19:08.746 "is_configured": false, 00:19:08.746 "data_offset": 2048, 00:19:08.746 "data_size": 63488 00:19:08.746 }, 00:19:08.746 { 00:19:08.746 "name": "BaseBdev3", 00:19:08.746 "uuid": "07fb47c4-72e1-4a5a-8e4f-689107db1fc4", 00:19:08.746 "is_configured": true, 00:19:08.746 "data_offset": 2048, 00:19:08.746 "data_size": 63488 00:19:08.746 } 00:19:08.746 ] 00:19:08.746 }' 00:19:08.746 16:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.746 16:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.314 16:36:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.314 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:09.573 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:09.573 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:09.832 [2024-07-24 16:36:06.479752] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:09.832 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:09.832 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:09.832 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:09.833 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:09.833 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:09.833 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:09.833 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:09.833 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:09.833 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:09.833 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:09.833 16:36:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.833 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.092 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:10.092 "name": "Existed_Raid", 00:19:10.092 "uuid": "2fdbd9b1-eeff-4ff9-b6b4-b37b1fda8499", 00:19:10.092 "strip_size_kb": 64, 00:19:10.092 "state": "configuring", 00:19:10.092 "raid_level": "concat", 00:19:10.092 "superblock": true, 00:19:10.092 "num_base_bdevs": 3, 00:19:10.092 "num_base_bdevs_discovered": 1, 00:19:10.092 "num_base_bdevs_operational": 3, 00:19:10.092 "base_bdevs_list": [ 00:19:10.092 { 00:19:10.092 "name": null, 00:19:10.092 "uuid": "32f10311-e80c-4698-9d0d-fd4ecc8fd8d7", 00:19:10.092 "is_configured": false, 00:19:10.092 "data_offset": 2048, 00:19:10.092 "data_size": 63488 00:19:10.092 }, 00:19:10.092 { 00:19:10.092 "name": null, 00:19:10.092 "uuid": "dd52103f-9827-4354-9c68-1cbe8d0d4ea7", 00:19:10.092 "is_configured": false, 00:19:10.092 "data_offset": 2048, 00:19:10.092 "data_size": 63488 00:19:10.092 }, 00:19:10.092 { 00:19:10.092 "name": "BaseBdev3", 00:19:10.092 "uuid": "07fb47c4-72e1-4a5a-8e4f-689107db1fc4", 00:19:10.092 "is_configured": true, 00:19:10.092 "data_offset": 2048, 00:19:10.092 "data_size": 63488 00:19:10.092 } 00:19:10.092 ] 00:19:10.092 }' 00:19:10.092 16:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:10.092 16:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.660 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.660 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:19:10.919 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:10.919 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:11.178 [2024-07-24 16:36:07.889786] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:11.178 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:11.178 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:11.178 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:11.178 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:11.178 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:11.178 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:11.178 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:11.178 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:11.179 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:11.179 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:11.179 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.179 16:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:19:11.437 16:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.437 "name": "Existed_Raid", 00:19:11.437 "uuid": "2fdbd9b1-eeff-4ff9-b6b4-b37b1fda8499", 00:19:11.437 "strip_size_kb": 64, 00:19:11.437 "state": "configuring", 00:19:11.437 "raid_level": "concat", 00:19:11.437 "superblock": true, 00:19:11.437 "num_base_bdevs": 3, 00:19:11.437 "num_base_bdevs_discovered": 2, 00:19:11.437 "num_base_bdevs_operational": 3, 00:19:11.437 "base_bdevs_list": [ 00:19:11.437 { 00:19:11.437 "name": null, 00:19:11.437 "uuid": "32f10311-e80c-4698-9d0d-fd4ecc8fd8d7", 00:19:11.437 "is_configured": false, 00:19:11.438 "data_offset": 2048, 00:19:11.438 "data_size": 63488 00:19:11.438 }, 00:19:11.438 { 00:19:11.438 "name": "BaseBdev2", 00:19:11.438 "uuid": "dd52103f-9827-4354-9c68-1cbe8d0d4ea7", 00:19:11.438 "is_configured": true, 00:19:11.438 "data_offset": 2048, 00:19:11.438 "data_size": 63488 00:19:11.438 }, 00:19:11.438 { 00:19:11.438 "name": "BaseBdev3", 00:19:11.438 "uuid": "07fb47c4-72e1-4a5a-8e4f-689107db1fc4", 00:19:11.438 "is_configured": true, 00:19:11.438 "data_offset": 2048, 00:19:11.438 "data_size": 63488 00:19:11.438 } 00:19:11.438 ] 00:19:11.438 }' 00:19:11.438 16:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.438 16:36:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.004 16:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.004 16:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:12.263 16:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:12.263 16:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.263 16:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:12.522 16:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 32f10311-e80c-4698-9d0d-fd4ecc8fd8d7 00:19:12.780 [2024-07-24 16:36:09.413125] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:12.780 [2024-07-24 16:36:09.413383] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:19:12.780 [2024-07-24 16:36:09.413405] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:12.780 [2024-07-24 16:36:09.413703] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:19:12.780 [2024-07-24 16:36:09.413918] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:19:12.780 [2024-07-24 16:36:09.413932] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:19:12.780 NewBaseBdev 00:19:12.780 [2024-07-24 16:36:09.414108] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.780 16:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:12.780 16:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:12.780 16:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:12.780 16:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:12.780 16:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
[[ -z '' ]] 00:19:12.780 16:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:12.780 16:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:13.038 16:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:13.038 [ 00:19:13.038 { 00:19:13.038 "name": "NewBaseBdev", 00:19:13.038 "aliases": [ 00:19:13.038 "32f10311-e80c-4698-9d0d-fd4ecc8fd8d7" 00:19:13.038 ], 00:19:13.038 "product_name": "Malloc disk", 00:19:13.038 "block_size": 512, 00:19:13.038 "num_blocks": 65536, 00:19:13.038 "uuid": "32f10311-e80c-4698-9d0d-fd4ecc8fd8d7", 00:19:13.038 "assigned_rate_limits": { 00:19:13.038 "rw_ios_per_sec": 0, 00:19:13.038 "rw_mbytes_per_sec": 0, 00:19:13.038 "r_mbytes_per_sec": 0, 00:19:13.038 "w_mbytes_per_sec": 0 00:19:13.038 }, 00:19:13.038 "claimed": true, 00:19:13.038 "claim_type": "exclusive_write", 00:19:13.038 "zoned": false, 00:19:13.038 "supported_io_types": { 00:19:13.038 "read": true, 00:19:13.038 "write": true, 00:19:13.038 "unmap": true, 00:19:13.038 "flush": true, 00:19:13.038 "reset": true, 00:19:13.038 "nvme_admin": false, 00:19:13.038 "nvme_io": false, 00:19:13.038 "nvme_io_md": false, 00:19:13.038 "write_zeroes": true, 00:19:13.038 "zcopy": true, 00:19:13.038 "get_zone_info": false, 00:19:13.038 "zone_management": false, 00:19:13.038 "zone_append": false, 00:19:13.038 "compare": false, 00:19:13.038 "compare_and_write": false, 00:19:13.038 "abort": true, 00:19:13.038 "seek_hole": false, 00:19:13.038 "seek_data": false, 00:19:13.038 "copy": true, 00:19:13.038 "nvme_iov_md": false 00:19:13.038 }, 00:19:13.038 "memory_domains": [ 00:19:13.038 { 00:19:13.038 "dma_device_id": "system", 00:19:13.038 
"dma_device_type": 1 00:19:13.038 }, 00:19:13.038 { 00:19:13.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.038 "dma_device_type": 2 00:19:13.038 } 00:19:13.038 ], 00:19:13.038 "driver_specific": {} 00:19:13.038 } 00:19:13.038 ] 00:19:13.038 16:36:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:13.038 16:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:13.038 16:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:13.038 16:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:13.038 16:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:13.039 16:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:13.039 16:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:13.039 16:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.039 16:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.039 16:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.039 16:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.039 16:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.039 16:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.297 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.297 "name": 
"Existed_Raid", 00:19:13.297 "uuid": "2fdbd9b1-eeff-4ff9-b6b4-b37b1fda8499", 00:19:13.297 "strip_size_kb": 64, 00:19:13.297 "state": "online", 00:19:13.297 "raid_level": "concat", 00:19:13.297 "superblock": true, 00:19:13.297 "num_base_bdevs": 3, 00:19:13.297 "num_base_bdevs_discovered": 3, 00:19:13.297 "num_base_bdevs_operational": 3, 00:19:13.297 "base_bdevs_list": [ 00:19:13.297 { 00:19:13.297 "name": "NewBaseBdev", 00:19:13.297 "uuid": "32f10311-e80c-4698-9d0d-fd4ecc8fd8d7", 00:19:13.297 "is_configured": true, 00:19:13.297 "data_offset": 2048, 00:19:13.297 "data_size": 63488 00:19:13.297 }, 00:19:13.297 { 00:19:13.297 "name": "BaseBdev2", 00:19:13.297 "uuid": "dd52103f-9827-4354-9c68-1cbe8d0d4ea7", 00:19:13.297 "is_configured": true, 00:19:13.297 "data_offset": 2048, 00:19:13.297 "data_size": 63488 00:19:13.297 }, 00:19:13.297 { 00:19:13.297 "name": "BaseBdev3", 00:19:13.297 "uuid": "07fb47c4-72e1-4a5a-8e4f-689107db1fc4", 00:19:13.297 "is_configured": true, 00:19:13.297 "data_offset": 2048, 00:19:13.297 "data_size": 63488 00:19:13.297 } 00:19:13.297 ] 00:19:13.297 }' 00:19:13.297 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.297 16:36:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.864 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:13.864 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:13.864 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:13.864 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:13.864 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:13.864 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:13.864 
16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:13.864 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:14.123 [2024-07-24 16:36:10.885557] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:14.123 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:14.123 "name": "Existed_Raid", 00:19:14.123 "aliases": [ 00:19:14.123 "2fdbd9b1-eeff-4ff9-b6b4-b37b1fda8499" 00:19:14.123 ], 00:19:14.123 "product_name": "Raid Volume", 00:19:14.123 "block_size": 512, 00:19:14.123 "num_blocks": 190464, 00:19:14.123 "uuid": "2fdbd9b1-eeff-4ff9-b6b4-b37b1fda8499", 00:19:14.123 "assigned_rate_limits": { 00:19:14.123 "rw_ios_per_sec": 0, 00:19:14.123 "rw_mbytes_per_sec": 0, 00:19:14.123 "r_mbytes_per_sec": 0, 00:19:14.123 "w_mbytes_per_sec": 0 00:19:14.123 }, 00:19:14.123 "claimed": false, 00:19:14.123 "zoned": false, 00:19:14.123 "supported_io_types": { 00:19:14.123 "read": true, 00:19:14.123 "write": true, 00:19:14.123 "unmap": true, 00:19:14.123 "flush": true, 00:19:14.123 "reset": true, 00:19:14.123 "nvme_admin": false, 00:19:14.123 "nvme_io": false, 00:19:14.123 "nvme_io_md": false, 00:19:14.123 "write_zeroes": true, 00:19:14.123 "zcopy": false, 00:19:14.123 "get_zone_info": false, 00:19:14.123 "zone_management": false, 00:19:14.123 "zone_append": false, 00:19:14.123 "compare": false, 00:19:14.123 "compare_and_write": false, 00:19:14.123 "abort": false, 00:19:14.123 "seek_hole": false, 00:19:14.123 "seek_data": false, 00:19:14.123 "copy": false, 00:19:14.123 "nvme_iov_md": false 00:19:14.123 }, 00:19:14.123 "memory_domains": [ 00:19:14.123 { 00:19:14.123 "dma_device_id": "system", 00:19:14.124 "dma_device_type": 1 00:19:14.124 }, 00:19:14.124 { 00:19:14.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:19:14.124 "dma_device_type": 2 00:19:14.124 }, 00:19:14.124 { 00:19:14.124 "dma_device_id": "system", 00:19:14.124 "dma_device_type": 1 00:19:14.124 }, 00:19:14.124 { 00:19:14.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.124 "dma_device_type": 2 00:19:14.124 }, 00:19:14.124 { 00:19:14.124 "dma_device_id": "system", 00:19:14.124 "dma_device_type": 1 00:19:14.124 }, 00:19:14.124 { 00:19:14.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.124 "dma_device_type": 2 00:19:14.124 } 00:19:14.124 ], 00:19:14.124 "driver_specific": { 00:19:14.124 "raid": { 00:19:14.124 "uuid": "2fdbd9b1-eeff-4ff9-b6b4-b37b1fda8499", 00:19:14.124 "strip_size_kb": 64, 00:19:14.124 "state": "online", 00:19:14.124 "raid_level": "concat", 00:19:14.124 "superblock": true, 00:19:14.124 "num_base_bdevs": 3, 00:19:14.124 "num_base_bdevs_discovered": 3, 00:19:14.124 "num_base_bdevs_operational": 3, 00:19:14.124 "base_bdevs_list": [ 00:19:14.124 { 00:19:14.124 "name": "NewBaseBdev", 00:19:14.124 "uuid": "32f10311-e80c-4698-9d0d-fd4ecc8fd8d7", 00:19:14.124 "is_configured": true, 00:19:14.124 "data_offset": 2048, 00:19:14.124 "data_size": 63488 00:19:14.124 }, 00:19:14.124 { 00:19:14.124 "name": "BaseBdev2", 00:19:14.124 "uuid": "dd52103f-9827-4354-9c68-1cbe8d0d4ea7", 00:19:14.124 "is_configured": true, 00:19:14.124 "data_offset": 2048, 00:19:14.124 "data_size": 63488 00:19:14.124 }, 00:19:14.124 { 00:19:14.124 "name": "BaseBdev3", 00:19:14.124 "uuid": "07fb47c4-72e1-4a5a-8e4f-689107db1fc4", 00:19:14.124 "is_configured": true, 00:19:14.124 "data_offset": 2048, 00:19:14.124 "data_size": 63488 00:19:14.124 } 00:19:14.124 ] 00:19:14.124 } 00:19:14.124 } 00:19:14.124 }' 00:19:14.124 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:14.124 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:14.124 BaseBdev2 
00:19:14.124 BaseBdev3' 00:19:14.124 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:14.124 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:14.124 16:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:14.383 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:14.383 "name": "NewBaseBdev", 00:19:14.383 "aliases": [ 00:19:14.383 "32f10311-e80c-4698-9d0d-fd4ecc8fd8d7" 00:19:14.383 ], 00:19:14.383 "product_name": "Malloc disk", 00:19:14.383 "block_size": 512, 00:19:14.383 "num_blocks": 65536, 00:19:14.383 "uuid": "32f10311-e80c-4698-9d0d-fd4ecc8fd8d7", 00:19:14.383 "assigned_rate_limits": { 00:19:14.383 "rw_ios_per_sec": 0, 00:19:14.383 "rw_mbytes_per_sec": 0, 00:19:14.383 "r_mbytes_per_sec": 0, 00:19:14.383 "w_mbytes_per_sec": 0 00:19:14.383 }, 00:19:14.383 "claimed": true, 00:19:14.383 "claim_type": "exclusive_write", 00:19:14.383 "zoned": false, 00:19:14.383 "supported_io_types": { 00:19:14.383 "read": true, 00:19:14.383 "write": true, 00:19:14.383 "unmap": true, 00:19:14.383 "flush": true, 00:19:14.383 "reset": true, 00:19:14.383 "nvme_admin": false, 00:19:14.383 "nvme_io": false, 00:19:14.383 "nvme_io_md": false, 00:19:14.383 "write_zeroes": true, 00:19:14.383 "zcopy": true, 00:19:14.383 "get_zone_info": false, 00:19:14.383 "zone_management": false, 00:19:14.383 "zone_append": false, 00:19:14.383 "compare": false, 00:19:14.383 "compare_and_write": false, 00:19:14.383 "abort": true, 00:19:14.383 "seek_hole": false, 00:19:14.383 "seek_data": false, 00:19:14.383 "copy": true, 00:19:14.383 "nvme_iov_md": false 00:19:14.383 }, 00:19:14.383 "memory_domains": [ 00:19:14.383 { 00:19:14.383 "dma_device_id": "system", 00:19:14.383 "dma_device_type": 1 00:19:14.383 }, 
00:19:14.383 { 00:19:14.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.383 "dma_device_type": 2 00:19:14.383 } 00:19:14.383 ], 00:19:14.383 "driver_specific": {} 00:19:14.383 }' 00:19:14.383 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:14.383 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:14.642 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:14.642 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:14.642 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:14.642 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:14.642 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:14.642 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:14.642 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:14.642 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:14.642 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:14.901 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:14.901 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:14.901 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:14.901 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:14.901 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:19:14.901 "name": "BaseBdev2", 00:19:14.901 "aliases": [ 00:19:14.901 "dd52103f-9827-4354-9c68-1cbe8d0d4ea7" 00:19:14.901 ], 00:19:14.901 "product_name": "Malloc disk", 00:19:14.901 "block_size": 512, 00:19:14.901 "num_blocks": 65536, 00:19:14.901 "uuid": "dd52103f-9827-4354-9c68-1cbe8d0d4ea7", 00:19:14.901 "assigned_rate_limits": { 00:19:14.901 "rw_ios_per_sec": 0, 00:19:14.901 "rw_mbytes_per_sec": 0, 00:19:14.901 "r_mbytes_per_sec": 0, 00:19:14.901 "w_mbytes_per_sec": 0 00:19:14.901 }, 00:19:14.901 "claimed": true, 00:19:14.901 "claim_type": "exclusive_write", 00:19:14.901 "zoned": false, 00:19:14.901 "supported_io_types": { 00:19:14.901 "read": true, 00:19:14.901 "write": true, 00:19:14.901 "unmap": true, 00:19:14.901 "flush": true, 00:19:14.901 "reset": true, 00:19:14.901 "nvme_admin": false, 00:19:14.901 "nvme_io": false, 00:19:14.901 "nvme_io_md": false, 00:19:14.901 "write_zeroes": true, 00:19:14.901 "zcopy": true, 00:19:14.901 "get_zone_info": false, 00:19:14.901 "zone_management": false, 00:19:14.901 "zone_append": false, 00:19:14.901 "compare": false, 00:19:14.901 "compare_and_write": false, 00:19:14.901 "abort": true, 00:19:14.901 "seek_hole": false, 00:19:14.901 "seek_data": false, 00:19:14.901 "copy": true, 00:19:14.901 "nvme_iov_md": false 00:19:14.901 }, 00:19:14.901 "memory_domains": [ 00:19:14.901 { 00:19:14.901 "dma_device_id": "system", 00:19:14.901 "dma_device_type": 1 00:19:14.901 }, 00:19:14.901 { 00:19:14.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.901 "dma_device_type": 2 00:19:14.901 } 00:19:14.901 ], 00:19:14.901 "driver_specific": {} 00:19:14.901 }' 00:19:14.901 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:15.159 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:15.159 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:15.159 16:36:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:15.159 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:15.159 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:15.159 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:15.159 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:15.159 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:15.159 16:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:15.419 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:15.419 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:15.419 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:15.419 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:15.419 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:15.677 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:15.677 "name": "BaseBdev3", 00:19:15.677 "aliases": [ 00:19:15.677 "07fb47c4-72e1-4a5a-8e4f-689107db1fc4" 00:19:15.677 ], 00:19:15.677 "product_name": "Malloc disk", 00:19:15.677 "block_size": 512, 00:19:15.677 "num_blocks": 65536, 00:19:15.677 "uuid": "07fb47c4-72e1-4a5a-8e4f-689107db1fc4", 00:19:15.677 "assigned_rate_limits": { 00:19:15.677 "rw_ios_per_sec": 0, 00:19:15.677 "rw_mbytes_per_sec": 0, 00:19:15.677 "r_mbytes_per_sec": 0, 00:19:15.677 "w_mbytes_per_sec": 0 00:19:15.677 }, 00:19:15.677 "claimed": true, 00:19:15.677 "claim_type": "exclusive_write", 
00:19:15.677 "zoned": false, 00:19:15.677 "supported_io_types": { 00:19:15.677 "read": true, 00:19:15.677 "write": true, 00:19:15.677 "unmap": true, 00:19:15.677 "flush": true, 00:19:15.677 "reset": true, 00:19:15.677 "nvme_admin": false, 00:19:15.677 "nvme_io": false, 00:19:15.677 "nvme_io_md": false, 00:19:15.677 "write_zeroes": true, 00:19:15.677 "zcopy": true, 00:19:15.677 "get_zone_info": false, 00:19:15.677 "zone_management": false, 00:19:15.677 "zone_append": false, 00:19:15.677 "compare": false, 00:19:15.677 "compare_and_write": false, 00:19:15.677 "abort": true, 00:19:15.677 "seek_hole": false, 00:19:15.677 "seek_data": false, 00:19:15.677 "copy": true, 00:19:15.677 "nvme_iov_md": false 00:19:15.677 }, 00:19:15.677 "memory_domains": [ 00:19:15.677 { 00:19:15.677 "dma_device_id": "system", 00:19:15.677 "dma_device_type": 1 00:19:15.677 }, 00:19:15.677 { 00:19:15.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.677 "dma_device_type": 2 00:19:15.677 } 00:19:15.677 ], 00:19:15.677 "driver_specific": {} 00:19:15.677 }' 00:19:15.677 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:15.677 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:15.677 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:15.677 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:15.677 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:15.677 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:15.677 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:15.677 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:15.935 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:19:15.935 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:15.935 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:15.935 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:15.935 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:16.194 [2024-07-24 16:36:12.826781] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:16.194 [2024-07-24 16:36:12.826813] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.194 [2024-07-24 16:36:12.826893] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:16.194 [2024-07-24 16:36:12.826959] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:16.194 [2024-07-24 16:36:12.826981] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:19:16.194 16:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1648214 00:19:16.194 16:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 1648214 ']' 00:19:16.194 16:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 1648214 00:19:16.194 16:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:19:16.194 16:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:16.194 16:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1648214 00:19:16.194 16:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:19:16.194 16:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:16.194 16:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1648214' 00:19:16.194 killing process with pid 1648214 00:19:16.194 16:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 1648214 00:19:16.194 [2024-07-24 16:36:12.901467] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:16.194 16:36:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 1648214 00:19:16.453 [2024-07-24 16:36:13.217853] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:18.348 16:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:18.348 00:19:18.348 real 0m29.452s 00:19:18.348 user 0m51.351s 00:19:18.348 sys 0m5.187s 00:19:18.348 16:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:18.348 16:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.348 ************************************ 00:19:18.348 END TEST raid_state_function_test_sb 00:19:18.348 ************************************ 00:19:18.348 16:36:15 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:19:18.348 16:36:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:18.348 16:36:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:18.348 16:36:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:18.348 ************************************ 00:19:18.348 START TEST raid_superblock_test 00:19:18.348 ************************************ 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:19:18.348 16:36:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=1654180 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 1654180 /var/tmp/spdk-raid.sock 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 1654180 ']' 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:18.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.348 16:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.348 [2024-07-24 16:36:15.145151] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:19:18.349 [2024-07-24 16:36:15.145270] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654180 ] 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:02.3 cannot be used 
00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:18.608 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:18.608 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:18.608 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:18.608 [2024-07-24 16:36:15.371548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.867 [2024-07-24 16:36:15.660181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.433 [2024-07-24 16:36:15.987878] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.433 [2024-07-24 16:36:15.987913] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.433 16:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:19.433 16:36:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:19:19.433 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:19:19.433 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:19.433 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:19:19.433 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:19:19.433 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:19.433 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:19.433 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:19.433 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:19.433 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:19.691 malloc1 00:19:19.691 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:19.950 [2024-07-24 16:36:16.660946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:19.950 [2024-07-24 16:36:16.661010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.950 [2024-07-24 16:36:16.661041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:19:19.950 [2024-07-24 16:36:16.661057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.950 [2024-07-24 16:36:16.663791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.950 [2024-07-24 16:36:16.663825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:19.950 pt1 00:19:19.950 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:19:19.950 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:19.950 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:19:19.950 16:36:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:19:19.950 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:19.950 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:19.950 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:19.950 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:19.950 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:20.207 malloc2 00:19:20.208 16:36:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:20.465 [2024-07-24 16:36:17.167488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:20.465 [2024-07-24 16:36:17.167551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.465 [2024-07-24 16:36:17.167580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:19:20.465 [2024-07-24 16:36:17.167595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.465 [2024-07-24 16:36:17.170387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.465 [2024-07-24 16:36:17.170426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:20.465 pt2 00:19:20.465 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:19:20.465 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:20.465 16:36:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:19:20.465 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:19:20.465 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:20.465 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:20.465 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:20.465 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:20.465 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:20.722 malloc3 00:19:20.722 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:20.979 [2024-07-24 16:36:17.679360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:20.979 [2024-07-24 16:36:17.679424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.979 [2024-07-24 16:36:17.679454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:19:20.979 [2024-07-24 16:36:17.679469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.979 [2024-07-24 16:36:17.682207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.979 [2024-07-24 16:36:17.682241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:20.979 pt3 00:19:20.979 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 
00:19:20.979 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:20.979 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:21.281 [2024-07-24 16:36:17.904017] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:21.281 [2024-07-24 16:36:17.906350] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:21.281 [2024-07-24 16:36:17.906431] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:21.281 [2024-07-24 16:36:17.906655] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041480 00:19:21.281 [2024-07-24 16:36:17.906681] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:21.281 [2024-07-24 16:36:17.907026] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:19:21.281 [2024-07-24 16:36:17.907273] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041480 00:19:21.281 [2024-07-24 16:36:17.907289] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041480 00:19:21.281 [2024-07-24 16:36:17.907495] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.281 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:21.281 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:21.281 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:21.281 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:21.281 16:36:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:21.281 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:21.281 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:21.281 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:21.281 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:21.281 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:21.281 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.281 16:36:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.564 16:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.564 "name": "raid_bdev1", 00:19:21.564 "uuid": "1f48f0e4-6a34-4db1-8680-d845777ae91a", 00:19:21.564 "strip_size_kb": 64, 00:19:21.564 "state": "online", 00:19:21.564 "raid_level": "concat", 00:19:21.564 "superblock": true, 00:19:21.564 "num_base_bdevs": 3, 00:19:21.564 "num_base_bdevs_discovered": 3, 00:19:21.564 "num_base_bdevs_operational": 3, 00:19:21.564 "base_bdevs_list": [ 00:19:21.564 { 00:19:21.564 "name": "pt1", 00:19:21.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:21.564 "is_configured": true, 00:19:21.564 "data_offset": 2048, 00:19:21.564 "data_size": 63488 00:19:21.564 }, 00:19:21.564 { 00:19:21.564 "name": "pt2", 00:19:21.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.564 "is_configured": true, 00:19:21.564 "data_offset": 2048, 00:19:21.564 "data_size": 63488 00:19:21.564 }, 00:19:21.564 { 00:19:21.564 "name": "pt3", 00:19:21.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:21.564 
"is_configured": true, 00:19:21.564 "data_offset": 2048, 00:19:21.564 "data_size": 63488 00:19:21.564 } 00:19:21.564 ] 00:19:21.564 }' 00:19:21.564 16:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.564 16:36:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.130 16:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:19:22.130 16:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:22.130 16:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:22.130 16:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:22.130 16:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:22.130 16:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:22.130 16:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:22.130 16:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:22.130 [2024-07-24 16:36:18.935216] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:22.130 16:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:22.130 "name": "raid_bdev1", 00:19:22.130 "aliases": [ 00:19:22.130 "1f48f0e4-6a34-4db1-8680-d845777ae91a" 00:19:22.130 ], 00:19:22.130 "product_name": "Raid Volume", 00:19:22.130 "block_size": 512, 00:19:22.130 "num_blocks": 190464, 00:19:22.130 "uuid": "1f48f0e4-6a34-4db1-8680-d845777ae91a", 00:19:22.130 "assigned_rate_limits": { 00:19:22.130 "rw_ios_per_sec": 0, 00:19:22.130 "rw_mbytes_per_sec": 0, 00:19:22.130 "r_mbytes_per_sec": 0, 00:19:22.130 "w_mbytes_per_sec": 0 00:19:22.130 }, 
00:19:22.130 "claimed": false, 00:19:22.130 "zoned": false, 00:19:22.130 "supported_io_types": { 00:19:22.130 "read": true, 00:19:22.130 "write": true, 00:19:22.130 "unmap": true, 00:19:22.130 "flush": true, 00:19:22.130 "reset": true, 00:19:22.130 "nvme_admin": false, 00:19:22.130 "nvme_io": false, 00:19:22.130 "nvme_io_md": false, 00:19:22.130 "write_zeroes": true, 00:19:22.130 "zcopy": false, 00:19:22.130 "get_zone_info": false, 00:19:22.130 "zone_management": false, 00:19:22.130 "zone_append": false, 00:19:22.130 "compare": false, 00:19:22.130 "compare_and_write": false, 00:19:22.130 "abort": false, 00:19:22.130 "seek_hole": false, 00:19:22.130 "seek_data": false, 00:19:22.130 "copy": false, 00:19:22.130 "nvme_iov_md": false 00:19:22.130 }, 00:19:22.130 "memory_domains": [ 00:19:22.130 { 00:19:22.130 "dma_device_id": "system", 00:19:22.130 "dma_device_type": 1 00:19:22.130 }, 00:19:22.130 { 00:19:22.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.130 "dma_device_type": 2 00:19:22.130 }, 00:19:22.130 { 00:19:22.130 "dma_device_id": "system", 00:19:22.130 "dma_device_type": 1 00:19:22.130 }, 00:19:22.130 { 00:19:22.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.131 "dma_device_type": 2 00:19:22.131 }, 00:19:22.131 { 00:19:22.131 "dma_device_id": "system", 00:19:22.131 "dma_device_type": 1 00:19:22.131 }, 00:19:22.131 { 00:19:22.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.131 "dma_device_type": 2 00:19:22.131 } 00:19:22.131 ], 00:19:22.131 "driver_specific": { 00:19:22.131 "raid": { 00:19:22.131 "uuid": "1f48f0e4-6a34-4db1-8680-d845777ae91a", 00:19:22.131 "strip_size_kb": 64, 00:19:22.131 "state": "online", 00:19:22.131 "raid_level": "concat", 00:19:22.131 "superblock": true, 00:19:22.131 "num_base_bdevs": 3, 00:19:22.131 "num_base_bdevs_discovered": 3, 00:19:22.131 "num_base_bdevs_operational": 3, 00:19:22.131 "base_bdevs_list": [ 00:19:22.131 { 00:19:22.131 "name": "pt1", 00:19:22.131 "uuid": "00000000-0000-0000-0000-000000000001", 
00:19:22.131 "is_configured": true, 00:19:22.131 "data_offset": 2048, 00:19:22.131 "data_size": 63488 00:19:22.131 }, 00:19:22.131 { 00:19:22.131 "name": "pt2", 00:19:22.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.131 "is_configured": true, 00:19:22.131 "data_offset": 2048, 00:19:22.131 "data_size": 63488 00:19:22.131 }, 00:19:22.131 { 00:19:22.131 "name": "pt3", 00:19:22.131 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:22.131 "is_configured": true, 00:19:22.131 "data_offset": 2048, 00:19:22.131 "data_size": 63488 00:19:22.131 } 00:19:22.131 ] 00:19:22.131 } 00:19:22.131 } 00:19:22.131 }' 00:19:22.131 16:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:22.388 16:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:22.388 pt2 00:19:22.388 pt3' 00:19:22.389 16:36:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:22.389 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:22.389 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:22.389 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:22.389 "name": "pt1", 00:19:22.389 "aliases": [ 00:19:22.389 "00000000-0000-0000-0000-000000000001" 00:19:22.389 ], 00:19:22.389 "product_name": "passthru", 00:19:22.389 "block_size": 512, 00:19:22.389 "num_blocks": 65536, 00:19:22.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:22.389 "assigned_rate_limits": { 00:19:22.389 "rw_ios_per_sec": 0, 00:19:22.389 "rw_mbytes_per_sec": 0, 00:19:22.389 "r_mbytes_per_sec": 0, 00:19:22.389 "w_mbytes_per_sec": 0 00:19:22.389 }, 00:19:22.389 "claimed": true, 00:19:22.389 "claim_type": "exclusive_write", 
00:19:22.389 "zoned": false, 00:19:22.389 "supported_io_types": { 00:19:22.389 "read": true, 00:19:22.389 "write": true, 00:19:22.389 "unmap": true, 00:19:22.389 "flush": true, 00:19:22.389 "reset": true, 00:19:22.389 "nvme_admin": false, 00:19:22.389 "nvme_io": false, 00:19:22.389 "nvme_io_md": false, 00:19:22.389 "write_zeroes": true, 00:19:22.389 "zcopy": true, 00:19:22.389 "get_zone_info": false, 00:19:22.389 "zone_management": false, 00:19:22.389 "zone_append": false, 00:19:22.389 "compare": false, 00:19:22.389 "compare_and_write": false, 00:19:22.389 "abort": true, 00:19:22.389 "seek_hole": false, 00:19:22.389 "seek_data": false, 00:19:22.389 "copy": true, 00:19:22.389 "nvme_iov_md": false 00:19:22.389 }, 00:19:22.389 "memory_domains": [ 00:19:22.389 { 00:19:22.389 "dma_device_id": "system", 00:19:22.389 "dma_device_type": 1 00:19:22.389 }, 00:19:22.389 { 00:19:22.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.389 "dma_device_type": 2 00:19:22.389 } 00:19:22.389 ], 00:19:22.389 "driver_specific": { 00:19:22.389 "passthru": { 00:19:22.389 "name": "pt1", 00:19:22.389 "base_bdev_name": "malloc1" 00:19:22.389 } 00:19:22.389 } 00:19:22.389 }' 00:19:22.389 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.646 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.646 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:22.646 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.646 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.646 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:22.646 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.646 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.646 16:36:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:22.647 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.905 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.905 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:22.905 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:22.905 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:22.905 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:23.163 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:23.163 "name": "pt2", 00:19:23.163 "aliases": [ 00:19:23.163 "00000000-0000-0000-0000-000000000002" 00:19:23.163 ], 00:19:23.163 "product_name": "passthru", 00:19:23.163 "block_size": 512, 00:19:23.163 "num_blocks": 65536, 00:19:23.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:23.163 "assigned_rate_limits": { 00:19:23.163 "rw_ios_per_sec": 0, 00:19:23.163 "rw_mbytes_per_sec": 0, 00:19:23.163 "r_mbytes_per_sec": 0, 00:19:23.163 "w_mbytes_per_sec": 0 00:19:23.163 }, 00:19:23.163 "claimed": true, 00:19:23.163 "claim_type": "exclusive_write", 00:19:23.163 "zoned": false, 00:19:23.163 "supported_io_types": { 00:19:23.163 "read": true, 00:19:23.163 "write": true, 00:19:23.163 "unmap": true, 00:19:23.163 "flush": true, 00:19:23.163 "reset": true, 00:19:23.163 "nvme_admin": false, 00:19:23.163 "nvme_io": false, 00:19:23.163 "nvme_io_md": false, 00:19:23.163 "write_zeroes": true, 00:19:23.163 "zcopy": true, 00:19:23.163 "get_zone_info": false, 00:19:23.163 "zone_management": false, 00:19:23.163 "zone_append": false, 00:19:23.163 "compare": false, 00:19:23.163 "compare_and_write": false, 00:19:23.163 
"abort": true, 00:19:23.163 "seek_hole": false, 00:19:23.163 "seek_data": false, 00:19:23.163 "copy": true, 00:19:23.163 "nvme_iov_md": false 00:19:23.163 }, 00:19:23.163 "memory_domains": [ 00:19:23.163 { 00:19:23.163 "dma_device_id": "system", 00:19:23.163 "dma_device_type": 1 00:19:23.163 }, 00:19:23.163 { 00:19:23.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.163 "dma_device_type": 2 00:19:23.163 } 00:19:23.163 ], 00:19:23.163 "driver_specific": { 00:19:23.163 "passthru": { 00:19:23.163 "name": "pt2", 00:19:23.163 "base_bdev_name": "malloc2" 00:19:23.163 } 00:19:23.163 } 00:19:23.163 }' 00:19:23.163 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:23.163 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:23.163 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:23.163 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:23.163 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:23.163 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:23.163 16:36:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:23.163 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:23.421 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:23.421 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:23.421 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:23.421 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:23.421 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:23.421 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:23.421 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:23.678 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:23.678 "name": "pt3", 00:19:23.678 "aliases": [ 00:19:23.678 "00000000-0000-0000-0000-000000000003" 00:19:23.678 ], 00:19:23.678 "product_name": "passthru", 00:19:23.678 "block_size": 512, 00:19:23.678 "num_blocks": 65536, 00:19:23.678 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:23.678 "assigned_rate_limits": { 00:19:23.678 "rw_ios_per_sec": 0, 00:19:23.678 "rw_mbytes_per_sec": 0, 00:19:23.678 "r_mbytes_per_sec": 0, 00:19:23.678 "w_mbytes_per_sec": 0 00:19:23.678 }, 00:19:23.678 "claimed": true, 00:19:23.678 "claim_type": "exclusive_write", 00:19:23.678 "zoned": false, 00:19:23.678 "supported_io_types": { 00:19:23.678 "read": true, 00:19:23.678 "write": true, 00:19:23.678 "unmap": true, 00:19:23.678 "flush": true, 00:19:23.678 "reset": true, 00:19:23.678 "nvme_admin": false, 00:19:23.678 "nvme_io": false, 00:19:23.678 "nvme_io_md": false, 00:19:23.678 "write_zeroes": true, 00:19:23.678 "zcopy": true, 00:19:23.678 "get_zone_info": false, 00:19:23.678 "zone_management": false, 00:19:23.678 "zone_append": false, 00:19:23.678 "compare": false, 00:19:23.678 "compare_and_write": false, 00:19:23.678 "abort": true, 00:19:23.678 "seek_hole": false, 00:19:23.678 "seek_data": false, 00:19:23.678 "copy": true, 00:19:23.678 "nvme_iov_md": false 00:19:23.678 }, 00:19:23.678 "memory_domains": [ 00:19:23.678 { 00:19:23.678 "dma_device_id": "system", 00:19:23.678 "dma_device_type": 1 00:19:23.678 }, 00:19:23.678 { 00:19:23.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.678 "dma_device_type": 2 00:19:23.678 } 00:19:23.678 ], 00:19:23.678 "driver_specific": { 00:19:23.678 "passthru": { 00:19:23.678 "name": "pt3", 00:19:23.678 "base_bdev_name": "malloc3" 
00:19:23.678 } 00:19:23.678 } 00:19:23.678 }' 00:19:23.678 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:23.678 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:23.678 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:23.678 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:23.678 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:23.936 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:23.936 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:23.936 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:23.936 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:23.936 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:23.936 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:23.936 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:23.936 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:23.936 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:19:24.193 [2024-07-24 16:36:20.920603] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.193 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=1f48f0e4-6a34-4db1-8680-d845777ae91a 00:19:24.193 16:36:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 1f48f0e4-6a34-4db1-8680-d845777ae91a ']' 00:19:24.193 16:36:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:24.450 [2024-07-24 16:36:21.148823] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:24.450 [2024-07-24 16:36:21.148858] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:24.450 [2024-07-24 16:36:21.148943] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:24.450 [2024-07-24 16:36:21.149015] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:24.450 [2024-07-24 16:36:21.149032] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041480 name raid_bdev1, state offline 00:19:24.450 16:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.450 16:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:19:24.707 16:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:19:24.707 16:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:19:24.707 16:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:24.707 16:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:24.965 16:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:24.965 16:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:25.223 16:36:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:25.223 16:36:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:25.223 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:25.223 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:25.482 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:19:25.482 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:25.482 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:25.482 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:25.482 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:25.482 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.482 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:25.482 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.482 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:25.482 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.482 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:25.482 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:19:25.482 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:25.742 [2024-07-24 16:36:22.512429] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:25.742 [2024-07-24 16:36:22.514744] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:25.742 [2024-07-24 16:36:22.514808] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:25.742 [2024-07-24 16:36:22.514869] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:25.742 [2024-07-24 16:36:22.514927] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:25.742 [2024-07-24 16:36:22.514957] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:25.742 [2024-07-24 16:36:22.514982] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:25.742 [2024-07-24 16:36:22.514996] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state configuring 00:19:25.742 request: 00:19:25.742 { 00:19:25.742 "name": "raid_bdev1", 00:19:25.742 
"raid_level": "concat", 00:19:25.742 "base_bdevs": [ 00:19:25.742 "malloc1", 00:19:25.742 "malloc2", 00:19:25.742 "malloc3" 00:19:25.742 ], 00:19:25.742 "strip_size_kb": 64, 00:19:25.742 "superblock": false, 00:19:25.742 "method": "bdev_raid_create", 00:19:25.742 "req_id": 1 00:19:25.742 } 00:19:25.742 Got JSON-RPC error response 00:19:25.742 response: 00:19:25.742 { 00:19:25.742 "code": -17, 00:19:25.742 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:25.742 } 00:19:25.742 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:25.742 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:25.742 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:25.742 16:36:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:25.742 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.742 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:19:26.000 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:19:26.000 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:19:26.000 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:26.258 [2024-07-24 16:36:22.965603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:26.258 [2024-07-24 16:36:22.965671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.258 [2024-07-24 16:36:22.965703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000042080 00:19:26.258 [2024-07-24 16:36:22.965718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.258 [2024-07-24 16:36:22.968515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.258 [2024-07-24 16:36:22.968551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:26.258 [2024-07-24 16:36:22.968655] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:26.258 [2024-07-24 16:36:22.968718] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:26.258 pt1 00:19:26.258 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:26.258 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:26.258 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:26.258 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:26.258 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:26.258 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:26.258 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:26.258 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:26.258 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:26.258 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:26.258 16:36:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.258 16:36:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.517 16:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:26.517 "name": "raid_bdev1", 00:19:26.517 "uuid": "1f48f0e4-6a34-4db1-8680-d845777ae91a", 00:19:26.517 "strip_size_kb": 64, 00:19:26.517 "state": "configuring", 00:19:26.517 "raid_level": "concat", 00:19:26.517 "superblock": true, 00:19:26.517 "num_base_bdevs": 3, 00:19:26.517 "num_base_bdevs_discovered": 1, 00:19:26.517 "num_base_bdevs_operational": 3, 00:19:26.517 "base_bdevs_list": [ 00:19:26.517 { 00:19:26.517 "name": "pt1", 00:19:26.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:26.517 "is_configured": true, 00:19:26.517 "data_offset": 2048, 00:19:26.517 "data_size": 63488 00:19:26.517 }, 00:19:26.517 { 00:19:26.517 "name": null, 00:19:26.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:26.517 "is_configured": false, 00:19:26.517 "data_offset": 2048, 00:19:26.517 "data_size": 63488 00:19:26.517 }, 00:19:26.517 { 00:19:26.517 "name": null, 00:19:26.517 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:26.517 "is_configured": false, 00:19:26.517 "data_offset": 2048, 00:19:26.517 "data_size": 63488 00:19:26.517 } 00:19:26.517 ] 00:19:26.517 }' 00:19:26.517 16:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:26.517 16:36:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.079 16:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:19:27.079 16:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:27.337 [2024-07-24 16:36:23.980331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:27.337 [2024-07-24 16:36:23.980398] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.337 [2024-07-24 16:36:23.980427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:19:27.338 [2024-07-24 16:36:23.980442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.338 [2024-07-24 16:36:23.981009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.338 [2024-07-24 16:36:23.981033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:27.338 [2024-07-24 16:36:23.981123] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:27.338 [2024-07-24 16:36:23.981163] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:27.338 pt2 00:19:27.338 16:36:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:27.595 [2024-07-24 16:36:24.204995] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.595 "name": "raid_bdev1", 00:19:27.595 "uuid": "1f48f0e4-6a34-4db1-8680-d845777ae91a", 00:19:27.595 "strip_size_kb": 64, 00:19:27.595 "state": "configuring", 00:19:27.595 "raid_level": "concat", 00:19:27.595 "superblock": true, 00:19:27.595 "num_base_bdevs": 3, 00:19:27.595 "num_base_bdevs_discovered": 1, 00:19:27.595 "num_base_bdevs_operational": 3, 00:19:27.595 "base_bdevs_list": [ 00:19:27.595 { 00:19:27.595 "name": "pt1", 00:19:27.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:27.595 "is_configured": true, 00:19:27.595 "data_offset": 2048, 00:19:27.595 "data_size": 63488 00:19:27.595 }, 00:19:27.595 { 00:19:27.595 "name": null, 00:19:27.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:27.595 "is_configured": false, 00:19:27.595 "data_offset": 2048, 00:19:27.595 "data_size": 63488 00:19:27.595 }, 00:19:27.595 { 00:19:27.595 "name": null, 00:19:27.595 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:27.595 "is_configured": false, 00:19:27.595 "data_offset": 2048, 00:19:27.595 "data_size": 63488 00:19:27.595 } 00:19:27.595 ] 00:19:27.595 }' 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.595 16:36:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.529 16:36:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:19:28.529 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:28.529 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:28.529 [2024-07-24 16:36:25.239781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:28.529 [2024-07-24 16:36:25.239850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.529 [2024-07-24 16:36:25.239875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:19:28.529 [2024-07-24 16:36:25.239893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.529 [2024-07-24 16:36:25.240461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.529 [2024-07-24 16:36:25.240491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:28.529 [2024-07-24 16:36:25.240581] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:28.529 [2024-07-24 16:36:25.240610] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:28.529 pt2 00:19:28.529 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:19:28.529 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:28.529 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:28.788 [2024-07-24 16:36:25.464362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:28.788 [2024-07-24 16:36:25.464428] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.788 [2024-07-24 16:36:25.464452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:19:28.788 [2024-07-24 16:36:25.464470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.788 [2024-07-24 16:36:25.465039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.788 [2024-07-24 16:36:25.465068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:28.788 [2024-07-24 16:36:25.465171] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:28.788 [2024-07-24 16:36:25.465207] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:28.788 [2024-07-24 16:36:25.465381] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:19:28.788 [2024-07-24 16:36:25.465398] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:28.788 [2024-07-24 16:36:25.465690] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:19:28.788 [2024-07-24 16:36:25.465913] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:19:28.788 [2024-07-24 16:36:25.465927] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:19:28.788 [2024-07-24 16:36:25.466123] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.788 pt3 00:19:28.788 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:19:28.788 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:28.788 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:28.788 16:36:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:28.788 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:28.788 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:28.788 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:28.788 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:28.788 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.788 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.788 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.788 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.788 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.788 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.047 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.047 "name": "raid_bdev1", 00:19:29.047 "uuid": "1f48f0e4-6a34-4db1-8680-d845777ae91a", 00:19:29.047 "strip_size_kb": 64, 00:19:29.047 "state": "online", 00:19:29.047 "raid_level": "concat", 00:19:29.047 "superblock": true, 00:19:29.047 "num_base_bdevs": 3, 00:19:29.047 "num_base_bdevs_discovered": 3, 00:19:29.047 "num_base_bdevs_operational": 3, 00:19:29.047 "base_bdevs_list": [ 00:19:29.047 { 00:19:29.047 "name": "pt1", 00:19:29.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:29.047 "is_configured": true, 00:19:29.047 "data_offset": 2048, 00:19:29.047 "data_size": 63488 00:19:29.047 }, 00:19:29.047 { 00:19:29.047 "name": "pt2", 
00:19:29.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:29.047 "is_configured": true, 00:19:29.047 "data_offset": 2048, 00:19:29.047 "data_size": 63488 00:19:29.047 }, 00:19:29.047 { 00:19:29.047 "name": "pt3", 00:19:29.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:29.047 "is_configured": true, 00:19:29.047 "data_offset": 2048, 00:19:29.047 "data_size": 63488 00:19:29.047 } 00:19:29.047 ] 00:19:29.047 }' 00:19:29.047 16:36:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.047 16:36:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.614 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:19:29.614 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:29.614 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:29.614 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:29.614 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:29.614 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:29.614 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:29.614 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:29.872 [2024-07-24 16:36:26.503531] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:29.872 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:29.872 "name": "raid_bdev1", 00:19:29.872 "aliases": [ 00:19:29.872 "1f48f0e4-6a34-4db1-8680-d845777ae91a" 00:19:29.872 ], 00:19:29.872 "product_name": "Raid Volume", 00:19:29.872 "block_size": 512, 
00:19:29.872 "num_blocks": 190464, 00:19:29.872 "uuid": "1f48f0e4-6a34-4db1-8680-d845777ae91a", 00:19:29.872 "assigned_rate_limits": { 00:19:29.872 "rw_ios_per_sec": 0, 00:19:29.872 "rw_mbytes_per_sec": 0, 00:19:29.872 "r_mbytes_per_sec": 0, 00:19:29.872 "w_mbytes_per_sec": 0 00:19:29.872 }, 00:19:29.872 "claimed": false, 00:19:29.872 "zoned": false, 00:19:29.872 "supported_io_types": { 00:19:29.872 "read": true, 00:19:29.872 "write": true, 00:19:29.872 "unmap": true, 00:19:29.872 "flush": true, 00:19:29.872 "reset": true, 00:19:29.872 "nvme_admin": false, 00:19:29.872 "nvme_io": false, 00:19:29.872 "nvme_io_md": false, 00:19:29.872 "write_zeroes": true, 00:19:29.872 "zcopy": false, 00:19:29.872 "get_zone_info": false, 00:19:29.872 "zone_management": false, 00:19:29.872 "zone_append": false, 00:19:29.872 "compare": false, 00:19:29.872 "compare_and_write": false, 00:19:29.872 "abort": false, 00:19:29.872 "seek_hole": false, 00:19:29.872 "seek_data": false, 00:19:29.872 "copy": false, 00:19:29.872 "nvme_iov_md": false 00:19:29.872 }, 00:19:29.872 "memory_domains": [ 00:19:29.872 { 00:19:29.872 "dma_device_id": "system", 00:19:29.872 "dma_device_type": 1 00:19:29.872 }, 00:19:29.872 { 00:19:29.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.872 "dma_device_type": 2 00:19:29.872 }, 00:19:29.872 { 00:19:29.872 "dma_device_id": "system", 00:19:29.872 "dma_device_type": 1 00:19:29.872 }, 00:19:29.872 { 00:19:29.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.872 "dma_device_type": 2 00:19:29.872 }, 00:19:29.872 { 00:19:29.872 "dma_device_id": "system", 00:19:29.872 "dma_device_type": 1 00:19:29.872 }, 00:19:29.872 { 00:19:29.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.872 "dma_device_type": 2 00:19:29.872 } 00:19:29.872 ], 00:19:29.872 "driver_specific": { 00:19:29.872 "raid": { 00:19:29.872 "uuid": "1f48f0e4-6a34-4db1-8680-d845777ae91a", 00:19:29.872 "strip_size_kb": 64, 00:19:29.872 "state": "online", 00:19:29.872 "raid_level": "concat", 
00:19:29.872 "superblock": true, 00:19:29.872 "num_base_bdevs": 3, 00:19:29.872 "num_base_bdevs_discovered": 3, 00:19:29.872 "num_base_bdevs_operational": 3, 00:19:29.872 "base_bdevs_list": [ 00:19:29.872 { 00:19:29.872 "name": "pt1", 00:19:29.872 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:29.872 "is_configured": true, 00:19:29.872 "data_offset": 2048, 00:19:29.872 "data_size": 63488 00:19:29.872 }, 00:19:29.872 { 00:19:29.872 "name": "pt2", 00:19:29.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:29.872 "is_configured": true, 00:19:29.872 "data_offset": 2048, 00:19:29.872 "data_size": 63488 00:19:29.872 }, 00:19:29.872 { 00:19:29.872 "name": "pt3", 00:19:29.872 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:29.872 "is_configured": true, 00:19:29.872 "data_offset": 2048, 00:19:29.873 "data_size": 63488 00:19:29.873 } 00:19:29.873 ] 00:19:29.873 } 00:19:29.873 } 00:19:29.873 }' 00:19:29.873 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:29.873 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:29.873 pt2 00:19:29.873 pt3' 00:19:29.873 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:29.873 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:29.873 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:30.131 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:30.131 "name": "pt1", 00:19:30.131 "aliases": [ 00:19:30.131 "00000000-0000-0000-0000-000000000001" 00:19:30.131 ], 00:19:30.131 "product_name": "passthru", 00:19:30.131 "block_size": 512, 00:19:30.131 "num_blocks": 65536, 00:19:30.131 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:30.131 "assigned_rate_limits": { 00:19:30.131 "rw_ios_per_sec": 0, 00:19:30.131 "rw_mbytes_per_sec": 0, 00:19:30.131 "r_mbytes_per_sec": 0, 00:19:30.131 "w_mbytes_per_sec": 0 00:19:30.131 }, 00:19:30.131 "claimed": true, 00:19:30.131 "claim_type": "exclusive_write", 00:19:30.131 "zoned": false, 00:19:30.131 "supported_io_types": { 00:19:30.131 "read": true, 00:19:30.131 "write": true, 00:19:30.131 "unmap": true, 00:19:30.131 "flush": true, 00:19:30.131 "reset": true, 00:19:30.131 "nvme_admin": false, 00:19:30.131 "nvme_io": false, 00:19:30.131 "nvme_io_md": false, 00:19:30.131 "write_zeroes": true, 00:19:30.131 "zcopy": true, 00:19:30.131 "get_zone_info": false, 00:19:30.131 "zone_management": false, 00:19:30.131 "zone_append": false, 00:19:30.131 "compare": false, 00:19:30.131 "compare_and_write": false, 00:19:30.131 "abort": true, 00:19:30.131 "seek_hole": false, 00:19:30.131 "seek_data": false, 00:19:30.131 "copy": true, 00:19:30.131 "nvme_iov_md": false 00:19:30.131 }, 00:19:30.131 "memory_domains": [ 00:19:30.131 { 00:19:30.131 "dma_device_id": "system", 00:19:30.131 "dma_device_type": 1 00:19:30.131 }, 00:19:30.131 { 00:19:30.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.131 "dma_device_type": 2 00:19:30.131 } 00:19:30.131 ], 00:19:30.131 "driver_specific": { 00:19:30.131 "passthru": { 00:19:30.131 "name": "pt1", 00:19:30.131 "base_bdev_name": "malloc1" 00:19:30.131 } 00:19:30.131 } 00:19:30.131 }' 00:19:30.131 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.131 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.131 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:30.131 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:30.131 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:30.131 16:36:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:30.131 16:36:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.390 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.390 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:30.390 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.390 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.390 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:30.390 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:30.390 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:30.390 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:30.648 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:30.648 "name": "pt2", 00:19:30.648 "aliases": [ 00:19:30.648 "00000000-0000-0000-0000-000000000002" 00:19:30.648 ], 00:19:30.648 "product_name": "passthru", 00:19:30.648 "block_size": 512, 00:19:30.648 "num_blocks": 65536, 00:19:30.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:30.648 "assigned_rate_limits": { 00:19:30.648 "rw_ios_per_sec": 0, 00:19:30.648 "rw_mbytes_per_sec": 0, 00:19:30.648 "r_mbytes_per_sec": 0, 00:19:30.648 "w_mbytes_per_sec": 0 00:19:30.648 }, 00:19:30.648 "claimed": true, 00:19:30.648 "claim_type": "exclusive_write", 00:19:30.648 "zoned": false, 00:19:30.648 "supported_io_types": { 00:19:30.648 "read": true, 00:19:30.648 "write": true, 00:19:30.648 "unmap": true, 00:19:30.648 "flush": true, 00:19:30.648 "reset": true, 00:19:30.648 "nvme_admin": false, 00:19:30.648 
"nvme_io": false, 00:19:30.648 "nvme_io_md": false, 00:19:30.648 "write_zeroes": true, 00:19:30.648 "zcopy": true, 00:19:30.648 "get_zone_info": false, 00:19:30.648 "zone_management": false, 00:19:30.648 "zone_append": false, 00:19:30.648 "compare": false, 00:19:30.648 "compare_and_write": false, 00:19:30.648 "abort": true, 00:19:30.648 "seek_hole": false, 00:19:30.648 "seek_data": false, 00:19:30.648 "copy": true, 00:19:30.648 "nvme_iov_md": false 00:19:30.648 }, 00:19:30.648 "memory_domains": [ 00:19:30.648 { 00:19:30.648 "dma_device_id": "system", 00:19:30.648 "dma_device_type": 1 00:19:30.648 }, 00:19:30.648 { 00:19:30.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.648 "dma_device_type": 2 00:19:30.648 } 00:19:30.648 ], 00:19:30.648 "driver_specific": { 00:19:30.648 "passthru": { 00:19:30.648 "name": "pt2", 00:19:30.648 "base_bdev_name": "malloc2" 00:19:30.648 } 00:19:30.648 } 00:19:30.648 }' 00:19:30.648 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.648 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:30.648 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:30.648 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:30.648 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:30.907 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:30.907 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.907 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:30.907 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:30.907 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:30.907 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:19:30.907 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:30.907 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:30.907 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:30.907 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:31.165 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:31.165 "name": "pt3", 00:19:31.165 "aliases": [ 00:19:31.165 "00000000-0000-0000-0000-000000000003" 00:19:31.165 ], 00:19:31.165 "product_name": "passthru", 00:19:31.165 "block_size": 512, 00:19:31.165 "num_blocks": 65536, 00:19:31.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:31.165 "assigned_rate_limits": { 00:19:31.165 "rw_ios_per_sec": 0, 00:19:31.165 "rw_mbytes_per_sec": 0, 00:19:31.165 "r_mbytes_per_sec": 0, 00:19:31.165 "w_mbytes_per_sec": 0 00:19:31.165 }, 00:19:31.165 "claimed": true, 00:19:31.165 "claim_type": "exclusive_write", 00:19:31.165 "zoned": false, 00:19:31.165 "supported_io_types": { 00:19:31.165 "read": true, 00:19:31.165 "write": true, 00:19:31.165 "unmap": true, 00:19:31.165 "flush": true, 00:19:31.165 "reset": true, 00:19:31.165 "nvme_admin": false, 00:19:31.165 "nvme_io": false, 00:19:31.165 "nvme_io_md": false, 00:19:31.165 "write_zeroes": true, 00:19:31.165 "zcopy": true, 00:19:31.165 "get_zone_info": false, 00:19:31.165 "zone_management": false, 00:19:31.165 "zone_append": false, 00:19:31.165 "compare": false, 00:19:31.165 "compare_and_write": false, 00:19:31.165 "abort": true, 00:19:31.165 "seek_hole": false, 00:19:31.165 "seek_data": false, 00:19:31.165 "copy": true, 00:19:31.165 "nvme_iov_md": false 00:19:31.165 }, 00:19:31.165 "memory_domains": [ 00:19:31.165 { 00:19:31.165 "dma_device_id": "system", 00:19:31.165 
"dma_device_type": 1 00:19:31.165 }, 00:19:31.165 { 00:19:31.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.165 "dma_device_type": 2 00:19:31.165 } 00:19:31.165 ], 00:19:31.165 "driver_specific": { 00:19:31.165 "passthru": { 00:19:31.165 "name": "pt3", 00:19:31.165 "base_bdev_name": "malloc3" 00:19:31.165 } 00:19:31.165 } 00:19:31.165 }' 00:19:31.165 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.165 16:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.165 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:31.165 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.423 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.423 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:31.423 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.423 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.423 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:31.423 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:31.423 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:31.423 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:31.424 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:31.424 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:19:31.682 [2024-07-24 16:36:28.476880] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:31.682 16:36:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 1f48f0e4-6a34-4db1-8680-d845777ae91a '!=' 1f48f0e4-6a34-4db1-8680-d845777ae91a ']' 00:19:31.682 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:19:31.682 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:31.682 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:31.682 16:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 1654180 00:19:31.682 16:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 1654180 ']' 00:19:31.682 16:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 1654180 00:19:31.682 16:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:19:31.682 16:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.682 16:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1654180 00:19:31.941 16:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:31.941 16:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:31.941 16:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1654180' 00:19:31.941 killing process with pid 1654180 00:19:31.941 16:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 1654180 00:19:31.941 [2024-07-24 16:36:28.554113] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:31.941 [2024-07-24 16:36:28.554218] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.941 [2024-07-24 16:36:28.554290] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:31.941 
16:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 1654180 00:19:31.941 [2024-07-24 16:36:28.554309] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:19:32.199 [2024-07-24 16:36:28.892927] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:34.100 16:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:19:34.100 00:19:34.100 real 0m15.612s 00:19:34.100 user 0m26.081s 00:19:34.100 sys 0m2.642s 00:19:34.100 16:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:34.100 16:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.100 ************************************ 00:19:34.100 END TEST raid_superblock_test 00:19:34.100 ************************************ 00:19:34.100 16:36:30 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:19:34.100 16:36:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:34.100 16:36:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:34.100 16:36:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.100 ************************************ 00:19:34.100 START TEST raid_read_error_test 00:19:34.100 ************************************ 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # 
create_arg+=' -z 64' 00:19:34.100 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:19:34.101 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.YoScmtCLZb 00:19:34.101 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1657073 00:19:34.101 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1657073 /var/tmp/spdk-raid.sock 00:19:34.101 16:36:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:34.101 16:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 1657073 ']' 00:19:34.101 16:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:34.101 16:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.101 16:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:34.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:34.101 16:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.101 16:36:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.101 [2024-07-24 16:36:30.844271] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:19:34.101 [2024-07-24 16:36:30.844391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1657073 ] 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:02.3 cannot be used 
00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:34.359 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:34.359 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.359 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:34.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:34.360 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:34.360 [2024-07-24 16:36:31.067887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.650 [2024-07-24 16:36:31.357602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.909 [2024-07-24 16:36:31.690953] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.909 [2024-07-24 16:36:31.690988] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.168 16:36:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.168 16:36:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:35.168 16:36:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:35.168 16:36:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:35.426 BaseBdev1_malloc 00:19:35.426 16:36:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:35.684 true 00:19:35.684 16:36:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:35.942 [2024-07-24 16:36:32.585024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:35.942 [2024-07-24 16:36:32.585084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.942 [2024-07-24 16:36:32.585111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:19:35.942 [2024-07-24 16:36:32.585132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.942 [2024-07-24 16:36:32.587938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.942 [2024-07-24 16:36:32.587977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:35.942 BaseBdev1 00:19:35.942 16:36:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:35.942 16:36:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:36.201 BaseBdev2_malloc 00:19:36.201 16:36:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:36.460 true 00:19:36.460 16:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:36.719 [2024-07-24 16:36:33.323694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:19:36.719 [2024-07-24 16:36:33.323756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.719 [2024-07-24 16:36:33.323781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:19:36.719 [2024-07-24 16:36:33.323802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.719 [2024-07-24 16:36:33.326582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.719 [2024-07-24 16:36:33.326619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:36.719 BaseBdev2 00:19:36.719 16:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:36.719 16:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:36.977 BaseBdev3_malloc 00:19:36.977 16:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:36.977 true 00:19:37.236 16:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:37.236 [2024-07-24 16:36:34.058362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:37.236 [2024-07-24 16:36:34.058420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.236 [2024-07-24 16:36:34.058446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:19:37.236 [2024-07-24 16:36:34.058464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.236 [2024-07-24 
16:36:34.061229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.236 [2024-07-24 16:36:34.061266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:37.236 BaseBdev3 00:19:37.236 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:37.494 [2024-07-24 16:36:34.270963] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:37.494 [2024-07-24 16:36:34.273331] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:37.494 [2024-07-24 16:36:34.273422] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:37.494 [2024-07-24 16:36:34.273715] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:19:37.494 [2024-07-24 16:36:34.273734] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:37.495 [2024-07-24 16:36:34.274068] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:19:37.495 [2024-07-24 16:36:34.274342] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:19:37.495 [2024-07-24 16:36:34.274364] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:19:37.495 [2024-07-24 16:36:34.274572] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.495 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:37.495 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:37.495 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:19:37.495 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:37.495 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:37.495 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:37.495 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:37.495 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:37.495 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:37.495 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:37.495 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.495 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.753 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:37.753 "name": "raid_bdev1", 00:19:37.753 "uuid": "77670996-1dc4-439b-a353-3ff18b7ccbff", 00:19:37.753 "strip_size_kb": 64, 00:19:37.753 "state": "online", 00:19:37.753 "raid_level": "concat", 00:19:37.753 "superblock": true, 00:19:37.753 "num_base_bdevs": 3, 00:19:37.753 "num_base_bdevs_discovered": 3, 00:19:37.753 "num_base_bdevs_operational": 3, 00:19:37.753 "base_bdevs_list": [ 00:19:37.753 { 00:19:37.753 "name": "BaseBdev1", 00:19:37.753 "uuid": "4012d287-2eab-5548-9b87-d9f6134f297d", 00:19:37.753 "is_configured": true, 00:19:37.753 "data_offset": 2048, 00:19:37.753 "data_size": 63488 00:19:37.753 }, 00:19:37.753 { 00:19:37.753 "name": "BaseBdev2", 00:19:37.753 "uuid": "f1830c84-226e-53f8-9c8a-2b37b3acc32d", 00:19:37.753 "is_configured": true, 00:19:37.753 "data_offset": 2048, 
00:19:37.753 "data_size": 63488 00:19:37.753 }, 00:19:37.753 { 00:19:37.753 "name": "BaseBdev3", 00:19:37.753 "uuid": "30649c40-8c95-5a4a-9b00-13115d337b0e", 00:19:37.753 "is_configured": true, 00:19:37.753 "data_offset": 2048, 00:19:37.753 "data_size": 63488 00:19:37.753 } 00:19:37.753 ] 00:19:37.753 }' 00:19:37.753 16:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:37.753 16:36:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.320 16:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:19:38.320 16:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:38.578 [2024-07-24 16:36:35.187483] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:39.513 16:36:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.513 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.771 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.771 "name": "raid_bdev1", 00:19:39.771 "uuid": "77670996-1dc4-439b-a353-3ff18b7ccbff", 00:19:39.771 "strip_size_kb": 64, 00:19:39.771 "state": "online", 00:19:39.771 "raid_level": "concat", 00:19:39.771 "superblock": true, 00:19:39.771 "num_base_bdevs": 3, 00:19:39.771 "num_base_bdevs_discovered": 3, 00:19:39.771 "num_base_bdevs_operational": 3, 00:19:39.771 "base_bdevs_list": [ 00:19:39.771 { 00:19:39.771 "name": "BaseBdev1", 00:19:39.771 "uuid": "4012d287-2eab-5548-9b87-d9f6134f297d", 00:19:39.771 "is_configured": true, 00:19:39.771 "data_offset": 2048, 00:19:39.771 "data_size": 63488 00:19:39.771 }, 00:19:39.771 { 00:19:39.771 "name": "BaseBdev2", 00:19:39.771 "uuid": "f1830c84-226e-53f8-9c8a-2b37b3acc32d", 00:19:39.771 "is_configured": true, 00:19:39.771 "data_offset": 2048, 00:19:39.771 "data_size": 63488 00:19:39.771 }, 00:19:39.771 { 00:19:39.771 "name": "BaseBdev3", 00:19:39.771 "uuid": "30649c40-8c95-5a4a-9b00-13115d337b0e", 
00:19:39.771 "is_configured": true, 00:19:39.771 "data_offset": 2048, 00:19:39.771 "data_size": 63488 00:19:39.771 } 00:19:39.771 ] 00:19:39.771 }' 00:19:39.771 16:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.771 16:36:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.336 16:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:40.595 [2024-07-24 16:36:37.375907] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.595 [2024-07-24 16:36:37.375946] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.595 [2024-07-24 16:36:37.379219] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.595 [2024-07-24 16:36:37.379268] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.595 [2024-07-24 16:36:37.379316] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.595 [2024-07-24 16:36:37.379334] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:19:40.595 0 00:19:40.595 16:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1657073 00:19:40.595 16:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 1657073 ']' 00:19:40.595 16:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 1657073 00:19:40.595 16:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:19:40.595 16:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:40.595 16:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1657073 
00:19:40.595 16:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:40.595 16:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:40.595 16:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1657073' 00:19:40.595 killing process with pid 1657073 00:19:40.595 16:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 1657073 00:19:40.595 [2024-07-24 16:36:37.452485] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:40.595 16:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 1657073 00:19:40.939 [2024-07-24 16:36:37.687610] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:42.832 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.YoScmtCLZb 00:19:42.832 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:19:42.832 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:19:42.832 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:19:42.832 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:19:42.832 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:42.832 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:42.833 16:36:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:19:42.833 00:19:42.833 real 0m8.770s 00:19:42.833 user 0m12.440s 00:19:42.833 sys 0m1.310s 00:19:42.833 16:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:42.833 16:36:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.833 ************************************ 
00:19:42.833 END TEST raid_read_error_test 00:19:42.833 ************************************ 00:19:42.833 16:36:39 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:19:42.833 16:36:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:42.833 16:36:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:42.833 16:36:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.833 ************************************ 00:19:42.833 START TEST raid_write_error_test 00:19:42.833 ************************************ 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:19:42.833 16:36:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.ORry3TnLfD 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1658751 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1658751 /var/tmp/spdk-raid.sock 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:42.833 16:36:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 1658751 ']' 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:42.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:42.833 16:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.091 [2024-07-24 16:36:39.705233] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:19:43.091 [2024-07-24 16:36:39.705356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1658751 ] 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:43.091 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:19:43.091 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:43.091 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:43.091 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:43.091 [2024-07-24 16:36:39.929225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.655 [2024-07-24 16:36:40.222055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.912 
[2024-07-24 16:36:40.573697] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.912 [2024-07-24 16:36:40.573733] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.912 16:36:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.912 16:36:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:43.912 16:36:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:43.912 16:36:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:44.169 BaseBdev1_malloc 00:19:44.169 16:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:44.426 true 00:19:44.426 16:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:44.683 [2024-07-24 16:36:41.441259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:44.683 [2024-07-24 16:36:41.441317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.683 [2024-07-24 16:36:41.441342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:19:44.683 [2024-07-24 16:36:41.441363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.683 [2024-07-24 16:36:41.444095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.683 [2024-07-24 16:36:41.444133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:19:44.683 BaseBdev1 00:19:44.683 16:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:44.683 16:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:44.940 BaseBdev2_malloc 00:19:44.940 16:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:45.197 true 00:19:45.197 16:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:45.454 [2024-07-24 16:36:42.174990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:45.454 [2024-07-24 16:36:42.175046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.454 [2024-07-24 16:36:42.175070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:19:45.454 [2024-07-24 16:36:42.175090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.454 [2024-07-24 16:36:42.177824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.454 [2024-07-24 16:36:42.177859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:45.454 BaseBdev2 00:19:45.454 16:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:45.454 16:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:45.712 BaseBdev3_malloc 00:19:45.712 16:36:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:45.997 true 00:19:45.997 16:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:46.255 [2024-07-24 16:36:42.899590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:46.255 [2024-07-24 16:36:42.899654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.255 [2024-07-24 16:36:42.899681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:19:46.255 [2024-07-24 16:36:42.899699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.255 [2024-07-24 16:36:42.902530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.255 [2024-07-24 16:36:42.902567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:46.255 BaseBdev3 00:19:46.255 16:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:46.513 [2024-07-24 16:36:43.128257] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.513 [2024-07-24 16:36:43.130624] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.513 [2024-07-24 16:36:43.130712] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:46.513 [2024-07-24 16:36:43.130998] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:19:46.513 [2024-07-24 
16:36:43.131016] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:46.513 [2024-07-24 16:36:43.131360] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:19:46.513 [2024-07-24 16:36:43.131626] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:19:46.513 [2024-07-24 16:36:43.131647] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:19:46.513 [2024-07-24 16:36:43.131873] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.513 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:46.513 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:46.513 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:46.513 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:46.513 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:46.513 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:46.513 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.513 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.513 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.513 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.513 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.513 16:36:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.771 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.771 "name": "raid_bdev1", 00:19:46.771 "uuid": "f4e62268-752e-4f3c-855b-8e8de522edcb", 00:19:46.771 "strip_size_kb": 64, 00:19:46.771 "state": "online", 00:19:46.771 "raid_level": "concat", 00:19:46.771 "superblock": true, 00:19:46.771 "num_base_bdevs": 3, 00:19:46.771 "num_base_bdevs_discovered": 3, 00:19:46.771 "num_base_bdevs_operational": 3, 00:19:46.771 "base_bdevs_list": [ 00:19:46.771 { 00:19:46.771 "name": "BaseBdev1", 00:19:46.771 "uuid": "63fe8a6e-0c29-532a-86ad-a9b0f5303057", 00:19:46.771 "is_configured": true, 00:19:46.771 "data_offset": 2048, 00:19:46.771 "data_size": 63488 00:19:46.771 }, 00:19:46.771 { 00:19:46.771 "name": "BaseBdev2", 00:19:46.771 "uuid": "249a7c7f-db14-5d54-8514-c4beab17af05", 00:19:46.771 "is_configured": true, 00:19:46.771 "data_offset": 2048, 00:19:46.771 "data_size": 63488 00:19:46.771 }, 00:19:46.771 { 00:19:46.771 "name": "BaseBdev3", 00:19:46.771 "uuid": "6a10e7d0-1147-51e6-963d-09f697bb81e3", 00:19:46.771 "is_configured": true, 00:19:46.771 "data_offset": 2048, 00:19:46.771 "data_size": 63488 00:19:46.771 } 00:19:46.771 ] 00:19:46.771 }' 00:19:46.771 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.771 16:36:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.337 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:19:47.337 16:36:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:47.337 [2024-07-24 16:36:44.056584] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:19:48.273 16:36:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.532 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.790 16:36:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.790 "name": "raid_bdev1", 00:19:48.790 "uuid": "f4e62268-752e-4f3c-855b-8e8de522edcb", 00:19:48.790 "strip_size_kb": 64, 00:19:48.790 "state": "online", 00:19:48.790 "raid_level": "concat", 00:19:48.790 "superblock": true, 00:19:48.790 "num_base_bdevs": 3, 00:19:48.790 "num_base_bdevs_discovered": 3, 00:19:48.790 "num_base_bdevs_operational": 3, 00:19:48.790 "base_bdevs_list": [ 00:19:48.790 { 00:19:48.790 "name": "BaseBdev1", 00:19:48.790 "uuid": "63fe8a6e-0c29-532a-86ad-a9b0f5303057", 00:19:48.790 "is_configured": true, 00:19:48.790 "data_offset": 2048, 00:19:48.790 "data_size": 63488 00:19:48.790 }, 00:19:48.790 { 00:19:48.790 "name": "BaseBdev2", 00:19:48.790 "uuid": "249a7c7f-db14-5d54-8514-c4beab17af05", 00:19:48.790 "is_configured": true, 00:19:48.790 "data_offset": 2048, 00:19:48.790 "data_size": 63488 00:19:48.790 }, 00:19:48.790 { 00:19:48.790 "name": "BaseBdev3", 00:19:48.790 "uuid": "6a10e7d0-1147-51e6-963d-09f697bb81e3", 00:19:48.790 "is_configured": true, 00:19:48.790 "data_offset": 2048, 00:19:48.790 "data_size": 63488 00:19:48.790 } 00:19:48.790 ] 00:19:48.790 }' 00:19:48.790 16:36:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.790 16:36:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.358 16:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:49.358 [2024-07-24 16:36:46.219513] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:49.358 [2024-07-24 16:36:46.219557] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:49.617 [2024-07-24 16:36:46.222851] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:49.617 [2024-07-24 16:36:46.222905] bdev_raid.c: 343:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:19:49.617 [2024-07-24 16:36:46.222954] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:49.617 [2024-07-24 16:36:46.222970] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:19:49.617 0 00:19:49.617 16:36:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1658751 00:19:49.617 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 1658751 ']' 00:19:49.617 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 1658751 00:19:49.617 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:19:49.617 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:49.617 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1658751 00:19:49.617 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:49.617 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:49.617 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1658751' 00:19:49.617 killing process with pid 1658751 00:19:49.617 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 1658751 00:19:49.617 [2024-07-24 16:36:46.300004] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:49.617 16:36:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 1658751 00:19:49.875 [2024-07-24 16:36:46.537120] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:51.777 16:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.ORry3TnLfD 00:19:51.777 16:36:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:19:51.777 16:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:19:51.777 16:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:19:51.777 16:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:19:51.777 16:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:51.777 16:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:51.777 16:36:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:19:51.777 00:19:51.777 real 0m8.809s 00:19:51.777 user 0m12.422s 00:19:51.777 sys 0m1.366s 00:19:51.777 16:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:51.777 16:36:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.777 ************************************ 00:19:51.777 END TEST raid_write_error_test 00:19:51.777 ************************************ 00:19:51.777 16:36:48 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:19:51.777 16:36:48 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:19:51.777 16:36:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:51.777 16:36:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:51.778 16:36:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:51.778 ************************************ 00:19:51.778 START TEST raid_state_function_test 00:19:51.778 ************************************ 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 
00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1660290 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1660290' 00:19:51.778 Process raid pid: 1660290 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1660290 /var/tmp/spdk-raid.sock 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 1660290 ']' 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:51.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:51.778 16:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.778 [2024-07-24 16:36:48.587474] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:19:51.778 [2024-07-24 16:36:48.587592] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.036 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:52.036 [2024-07-24 16:36:48.815254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.294 [2024-07-24 16:36:49.099449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.860 [2024-07-24 16:36:49.435924] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:52.860 [2024-07-24 16:36:49.435961] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:52.860 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.860 16:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:19:52.860 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b
'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:53.118 [2024-07-24 16:36:49.808711] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:53.118 [2024-07-24 16:36:49.808766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:53.118 [2024-07-24 16:36:49.808782] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:53.118 [2024-07-24 16:36:49.808799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:53.118 [2024-07-24 16:36:49.808810] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:53.118 [2024-07-24 16:36:49.808826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:53.118 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:53.118 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:53.118 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:53.118 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:53.118 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:53.118 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:53.118 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:53.118 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:53.118 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:53.118 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:19:53.118 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.118 16:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.376 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:53.376 "name": "Existed_Raid", 00:19:53.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.376 "strip_size_kb": 0, 00:19:53.376 "state": "configuring", 00:19:53.376 "raid_level": "raid1", 00:19:53.376 "superblock": false, 00:19:53.376 "num_base_bdevs": 3, 00:19:53.376 "num_base_bdevs_discovered": 0, 00:19:53.376 "num_base_bdevs_operational": 3, 00:19:53.376 "base_bdevs_list": [ 00:19:53.376 { 00:19:53.376 "name": "BaseBdev1", 00:19:53.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.376 "is_configured": false, 00:19:53.376 "data_offset": 0, 00:19:53.376 "data_size": 0 00:19:53.376 }, 00:19:53.376 { 00:19:53.376 "name": "BaseBdev2", 00:19:53.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.376 "is_configured": false, 00:19:53.376 "data_offset": 0, 00:19:53.376 "data_size": 0 00:19:53.376 }, 00:19:53.376 { 00:19:53.376 "name": "BaseBdev3", 00:19:53.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.376 "is_configured": false, 00:19:53.376 "data_offset": 0, 00:19:53.376 "data_size": 0 00:19:53.376 } 00:19:53.376 ] 00:19:53.376 }' 00:19:53.376 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:53.376 16:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:53.941 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:54.207 [2024-07-24 16:36:50.843357] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:54.207 [2024-07-24 16:36:50.843396] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:19:54.207 16:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:54.465 [2024-07-24 16:36:51.072002] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:54.465 [2024-07-24 16:36:51.072045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:54.465 [2024-07-24 16:36:51.072059] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:54.465 [2024-07-24 16:36:51.072079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:54.465 [2024-07-24 16:36:51.072090] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:54.465 [2024-07-24 16:36:51.072106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:54.465 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:54.722 [2024-07-24 16:36:51.347829] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:54.722 BaseBdev1 00:19:54.722 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:54.722 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:54.722 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:54.722 
16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:54.722 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:54.722 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:54.722 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:54.980 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:54.980 [ 00:19:54.980 { 00:19:54.980 "name": "BaseBdev1", 00:19:54.980 "aliases": [ 00:19:54.980 "07cd164b-cdd3-49b2-8f50-b2733da25023" 00:19:54.980 ], 00:19:54.980 "product_name": "Malloc disk", 00:19:54.980 "block_size": 512, 00:19:54.980 "num_blocks": 65536, 00:19:54.980 "uuid": "07cd164b-cdd3-49b2-8f50-b2733da25023", 00:19:54.980 "assigned_rate_limits": { 00:19:54.980 "rw_ios_per_sec": 0, 00:19:54.980 "rw_mbytes_per_sec": 0, 00:19:54.980 "r_mbytes_per_sec": 0, 00:19:54.980 "w_mbytes_per_sec": 0 00:19:54.980 }, 00:19:54.980 "claimed": true, 00:19:54.980 "claim_type": "exclusive_write", 00:19:54.980 "zoned": false, 00:19:54.980 "supported_io_types": { 00:19:54.980 "read": true, 00:19:54.980 "write": true, 00:19:54.981 "unmap": true, 00:19:54.981 "flush": true, 00:19:54.981 "reset": true, 00:19:54.981 "nvme_admin": false, 00:19:54.981 "nvme_io": false, 00:19:54.981 "nvme_io_md": false, 00:19:54.981 "write_zeroes": true, 00:19:54.981 "zcopy": true, 00:19:54.981 "get_zone_info": false, 00:19:54.981 "zone_management": false, 00:19:54.981 "zone_append": false, 00:19:54.981 "compare": false, 00:19:54.981 "compare_and_write": false, 00:19:54.981 "abort": true, 00:19:54.981 "seek_hole": false, 00:19:54.981 "seek_data": false, 00:19:54.981 
"copy": true, 00:19:54.981 "nvme_iov_md": false 00:19:54.981 }, 00:19:54.981 "memory_domains": [ 00:19:54.981 { 00:19:54.981 "dma_device_id": "system", 00:19:54.981 "dma_device_type": 1 00:19:54.981 }, 00:19:54.981 { 00:19:54.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.981 "dma_device_type": 2 00:19:54.981 } 00:19:54.981 ], 00:19:54.981 "driver_specific": {} 00:19:54.981 } 00:19:54.981 ] 00:19:54.981 16:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:54.981 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:54.981 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:54.981 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:54.981 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:54.981 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:54.981 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:54.981 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:54.981 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:54.981 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:54.981 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:54.981 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.981 16:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:19:55.239 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.239 "name": "Existed_Raid", 00:19:55.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.239 "strip_size_kb": 0, 00:19:55.239 "state": "configuring", 00:19:55.239 "raid_level": "raid1", 00:19:55.239 "superblock": false, 00:19:55.239 "num_base_bdevs": 3, 00:19:55.239 "num_base_bdevs_discovered": 1, 00:19:55.239 "num_base_bdevs_operational": 3, 00:19:55.239 "base_bdevs_list": [ 00:19:55.239 { 00:19:55.239 "name": "BaseBdev1", 00:19:55.239 "uuid": "07cd164b-cdd3-49b2-8f50-b2733da25023", 00:19:55.239 "is_configured": true, 00:19:55.239 "data_offset": 0, 00:19:55.239 "data_size": 65536 00:19:55.239 }, 00:19:55.239 { 00:19:55.239 "name": "BaseBdev2", 00:19:55.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.239 "is_configured": false, 00:19:55.239 "data_offset": 0, 00:19:55.239 "data_size": 0 00:19:55.239 }, 00:19:55.239 { 00:19:55.239 "name": "BaseBdev3", 00:19:55.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.239 "is_configured": false, 00:19:55.239 "data_offset": 0, 00:19:55.239 "data_size": 0 00:19:55.239 } 00:19:55.239 ] 00:19:55.239 }' 00:19:55.239 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.239 16:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.805 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:56.064 [2024-07-24 16:36:52.836041] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:56.064 [2024-07-24 16:36:52.836093] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:19:56.064 16:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:56.322 [2024-07-24 16:36:53.008603] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:56.322 [2024-07-24 16:36:53.010903] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:56.322 [2024-07-24 16:36:53.010947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:56.322 [2024-07-24 16:36:53.010961] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:56.322 [2024-07-24 16:36:53.010978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:56.322 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:56.322 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:56.322 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:56.322 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:56.322 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:56.322 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:56.322 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:56.322 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:56.322 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:56.322 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:56.322 16:36:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:56.322 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:56.322 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.322 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.579 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:56.579 "name": "Existed_Raid", 00:19:56.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.579 "strip_size_kb": 0, 00:19:56.579 "state": "configuring", 00:19:56.579 "raid_level": "raid1", 00:19:56.579 "superblock": false, 00:19:56.579 "num_base_bdevs": 3, 00:19:56.579 "num_base_bdevs_discovered": 1, 00:19:56.579 "num_base_bdevs_operational": 3, 00:19:56.579 "base_bdevs_list": [ 00:19:56.579 { 00:19:56.579 "name": "BaseBdev1", 00:19:56.579 "uuid": "07cd164b-cdd3-49b2-8f50-b2733da25023", 00:19:56.579 "is_configured": true, 00:19:56.579 "data_offset": 0, 00:19:56.579 "data_size": 65536 00:19:56.579 }, 00:19:56.579 { 00:19:56.579 "name": "BaseBdev2", 00:19:56.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.579 "is_configured": false, 00:19:56.579 "data_offset": 0, 00:19:56.579 "data_size": 0 00:19:56.579 }, 00:19:56.579 { 00:19:56.579 "name": "BaseBdev3", 00:19:56.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.579 "is_configured": false, 00:19:56.579 "data_offset": 0, 00:19:56.579 "data_size": 0 00:19:56.579 } 00:19:56.579 ] 00:19:56.579 }' 00:19:56.579 16:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:56.579 16:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.143 16:36:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:57.401 [2024-07-24 16:36:54.095111] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:57.401 BaseBdev2 00:19:57.401 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:57.401 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:57.401 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:57.401 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:57.401 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:57.401 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:57.401 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:57.658 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:57.915 [ 00:19:57.915 { 00:19:57.915 "name": "BaseBdev2", 00:19:57.915 "aliases": [ 00:19:57.915 "614fb298-988d-43f2-b2a1-3efd47063cbd" 00:19:57.915 ], 00:19:57.915 "product_name": "Malloc disk", 00:19:57.915 "block_size": 512, 00:19:57.915 "num_blocks": 65536, 00:19:57.915 "uuid": "614fb298-988d-43f2-b2a1-3efd47063cbd", 00:19:57.915 "assigned_rate_limits": { 00:19:57.915 "rw_ios_per_sec": 0, 00:19:57.915 "rw_mbytes_per_sec": 0, 00:19:57.915 "r_mbytes_per_sec": 0, 00:19:57.915 "w_mbytes_per_sec": 0 00:19:57.915 }, 00:19:57.915 "claimed": true, 00:19:57.915 "claim_type": 
"exclusive_write", 00:19:57.915 "zoned": false, 00:19:57.915 "supported_io_types": { 00:19:57.915 "read": true, 00:19:57.915 "write": true, 00:19:57.915 "unmap": true, 00:19:57.915 "flush": true, 00:19:57.915 "reset": true, 00:19:57.915 "nvme_admin": false, 00:19:57.915 "nvme_io": false, 00:19:57.915 "nvme_io_md": false, 00:19:57.915 "write_zeroes": true, 00:19:57.915 "zcopy": true, 00:19:57.915 "get_zone_info": false, 00:19:57.915 "zone_management": false, 00:19:57.915 "zone_append": false, 00:19:57.915 "compare": false, 00:19:57.915 "compare_and_write": false, 00:19:57.915 "abort": true, 00:19:57.915 "seek_hole": false, 00:19:57.915 "seek_data": false, 00:19:57.915 "copy": true, 00:19:57.915 "nvme_iov_md": false 00:19:57.915 }, 00:19:57.915 "memory_domains": [ 00:19:57.915 { 00:19:57.915 "dma_device_id": "system", 00:19:57.915 "dma_device_type": 1 00:19:57.915 }, 00:19:57.915 { 00:19:57.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.915 "dma_device_type": 2 00:19:57.915 } 00:19:57.915 ], 00:19:57.915 "driver_specific": {} 00:19:57.915 } 00:19:57.915 ] 00:19:57.915 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:57.915 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:57.915 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:57.915 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:57.915 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:57.915 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:57.915 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:57.916 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:19:57.916 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:57.916 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:57.916 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:57.916 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:57.916 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:57.916 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.916 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.173 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:58.173 "name": "Existed_Raid", 00:19:58.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.173 "strip_size_kb": 0, 00:19:58.173 "state": "configuring", 00:19:58.173 "raid_level": "raid1", 00:19:58.173 "superblock": false, 00:19:58.173 "num_base_bdevs": 3, 00:19:58.173 "num_base_bdevs_discovered": 2, 00:19:58.173 "num_base_bdevs_operational": 3, 00:19:58.173 "base_bdevs_list": [ 00:19:58.173 { 00:19:58.173 "name": "BaseBdev1", 00:19:58.173 "uuid": "07cd164b-cdd3-49b2-8f50-b2733da25023", 00:19:58.173 "is_configured": true, 00:19:58.173 "data_offset": 0, 00:19:58.173 "data_size": 65536 00:19:58.173 }, 00:19:58.173 { 00:19:58.173 "name": "BaseBdev2", 00:19:58.173 "uuid": "614fb298-988d-43f2-b2a1-3efd47063cbd", 00:19:58.173 "is_configured": true, 00:19:58.173 "data_offset": 0, 00:19:58.173 "data_size": 65536 00:19:58.173 }, 00:19:58.173 { 00:19:58.173 "name": "BaseBdev3", 00:19:58.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.173 "is_configured": false, 00:19:58.173 
"data_offset": 0, 00:19:58.173 "data_size": 0 00:19:58.173 } 00:19:58.173 ] 00:19:58.173 }' 00:19:58.173 16:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:58.173 16:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.738 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:58.996 [2024-07-24 16:36:55.636088] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:58.996 [2024-07-24 16:36:55.636134] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:19:58.996 [2024-07-24 16:36:55.636164] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:58.996 [2024-07-24 16:36:55.636496] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:19:58.996 [2024-07-24 16:36:55.636742] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:19:58.996 [2024-07-24 16:36:55.636758] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:19:58.996 [2024-07-24 16:36:55.637085] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.996 BaseBdev3 00:19:58.996 16:36:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:58.996 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:58.996 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:58.996 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:58.996 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:19:58.996 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:58.996 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:59.254 16:36:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:59.254 [ 00:19:59.254 { 00:19:59.254 "name": "BaseBdev3", 00:19:59.254 "aliases": [ 00:19:59.254 "b9d40719-a978-4221-bce6-cc6fe4fada3e" 00:19:59.254 ], 00:19:59.254 "product_name": "Malloc disk", 00:19:59.254 "block_size": 512, 00:19:59.254 "num_blocks": 65536, 00:19:59.254 "uuid": "b9d40719-a978-4221-bce6-cc6fe4fada3e", 00:19:59.254 "assigned_rate_limits": { 00:19:59.254 "rw_ios_per_sec": 0, 00:19:59.254 "rw_mbytes_per_sec": 0, 00:19:59.254 "r_mbytes_per_sec": 0, 00:19:59.254 "w_mbytes_per_sec": 0 00:19:59.254 }, 00:19:59.254 "claimed": true, 00:19:59.254 "claim_type": "exclusive_write", 00:19:59.254 "zoned": false, 00:19:59.254 "supported_io_types": { 00:19:59.254 "read": true, 00:19:59.254 "write": true, 00:19:59.254 "unmap": true, 00:19:59.254 "flush": true, 00:19:59.254 "reset": true, 00:19:59.254 "nvme_admin": false, 00:19:59.254 "nvme_io": false, 00:19:59.254 "nvme_io_md": false, 00:19:59.254 "write_zeroes": true, 00:19:59.254 "zcopy": true, 00:19:59.254 "get_zone_info": false, 00:19:59.254 "zone_management": false, 00:19:59.254 "zone_append": false, 00:19:59.254 "compare": false, 00:19:59.254 "compare_and_write": false, 00:19:59.254 "abort": true, 00:19:59.254 "seek_hole": false, 00:19:59.254 "seek_data": false, 00:19:59.254 "copy": true, 00:19:59.254 "nvme_iov_md": false 00:19:59.254 }, 00:19:59.254 "memory_domains": [ 00:19:59.254 { 00:19:59.254 "dma_device_id": "system", 00:19:59.254 "dma_device_type": 1 00:19:59.254 
}, 00:19:59.254 { 00:19:59.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.254 "dma_device_type": 2 00:19:59.254 } 00:19:59.254 ], 00:19:59.254 "driver_specific": {} 00:19:59.254 } 00:19:59.254 ] 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.254 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:19:59.512 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.512 "name": "Existed_Raid", 00:19:59.512 "uuid": "5497eb33-bbc1-4f3a-962a-35b54b3147ce", 00:19:59.512 "strip_size_kb": 0, 00:19:59.512 "state": "online", 00:19:59.512 "raid_level": "raid1", 00:19:59.512 "superblock": false, 00:19:59.512 "num_base_bdevs": 3, 00:19:59.512 "num_base_bdevs_discovered": 3, 00:19:59.512 "num_base_bdevs_operational": 3, 00:19:59.512 "base_bdevs_list": [ 00:19:59.512 { 00:19:59.512 "name": "BaseBdev1", 00:19:59.512 "uuid": "07cd164b-cdd3-49b2-8f50-b2733da25023", 00:19:59.512 "is_configured": true, 00:19:59.512 "data_offset": 0, 00:19:59.512 "data_size": 65536 00:19:59.512 }, 00:19:59.512 { 00:19:59.512 "name": "BaseBdev2", 00:19:59.512 "uuid": "614fb298-988d-43f2-b2a1-3efd47063cbd", 00:19:59.512 "is_configured": true, 00:19:59.512 "data_offset": 0, 00:19:59.512 "data_size": 65536 00:19:59.512 }, 00:19:59.512 { 00:19:59.512 "name": "BaseBdev3", 00:19:59.512 "uuid": "b9d40719-a978-4221-bce6-cc6fe4fada3e", 00:19:59.512 "is_configured": true, 00:19:59.512 "data_offset": 0, 00:19:59.512 "data_size": 65536 00:19:59.512 } 00:19:59.512 ] 00:19:59.512 }' 00:19:59.512 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.512 16:36:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.077 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:00.077 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:00.077 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:00.077 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:00.077 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:00.077 
16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:00.077 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:00.077 16:36:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:00.334 [2024-07-24 16:36:57.128519] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:00.334 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:00.334 "name": "Existed_Raid", 00:20:00.334 "aliases": [ 00:20:00.334 "5497eb33-bbc1-4f3a-962a-35b54b3147ce" 00:20:00.334 ], 00:20:00.334 "product_name": "Raid Volume", 00:20:00.334 "block_size": 512, 00:20:00.334 "num_blocks": 65536, 00:20:00.334 "uuid": "5497eb33-bbc1-4f3a-962a-35b54b3147ce", 00:20:00.334 "assigned_rate_limits": { 00:20:00.334 "rw_ios_per_sec": 0, 00:20:00.334 "rw_mbytes_per_sec": 0, 00:20:00.334 "r_mbytes_per_sec": 0, 00:20:00.334 "w_mbytes_per_sec": 0 00:20:00.334 }, 00:20:00.334 "claimed": false, 00:20:00.334 "zoned": false, 00:20:00.334 "supported_io_types": { 00:20:00.334 "read": true, 00:20:00.334 "write": true, 00:20:00.334 "unmap": false, 00:20:00.334 "flush": false, 00:20:00.334 "reset": true, 00:20:00.334 "nvme_admin": false, 00:20:00.334 "nvme_io": false, 00:20:00.334 "nvme_io_md": false, 00:20:00.334 "write_zeroes": true, 00:20:00.334 "zcopy": false, 00:20:00.334 "get_zone_info": false, 00:20:00.334 "zone_management": false, 00:20:00.334 "zone_append": false, 00:20:00.334 "compare": false, 00:20:00.334 "compare_and_write": false, 00:20:00.334 "abort": false, 00:20:00.334 "seek_hole": false, 00:20:00.334 "seek_data": false, 00:20:00.334 "copy": false, 00:20:00.334 "nvme_iov_md": false 00:20:00.334 }, 00:20:00.334 "memory_domains": [ 00:20:00.334 { 00:20:00.334 "dma_device_id": "system", 00:20:00.334 "dma_device_type": 1 
00:20:00.334 }, 00:20:00.334 { 00:20:00.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.334 "dma_device_type": 2 00:20:00.334 }, 00:20:00.334 { 00:20:00.334 "dma_device_id": "system", 00:20:00.334 "dma_device_type": 1 00:20:00.334 }, 00:20:00.334 { 00:20:00.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.334 "dma_device_type": 2 00:20:00.334 }, 00:20:00.334 { 00:20:00.334 "dma_device_id": "system", 00:20:00.334 "dma_device_type": 1 00:20:00.334 }, 00:20:00.334 { 00:20:00.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.334 "dma_device_type": 2 00:20:00.334 } 00:20:00.334 ], 00:20:00.334 "driver_specific": { 00:20:00.334 "raid": { 00:20:00.334 "uuid": "5497eb33-bbc1-4f3a-962a-35b54b3147ce", 00:20:00.334 "strip_size_kb": 0, 00:20:00.334 "state": "online", 00:20:00.334 "raid_level": "raid1", 00:20:00.334 "superblock": false, 00:20:00.334 "num_base_bdevs": 3, 00:20:00.334 "num_base_bdevs_discovered": 3, 00:20:00.334 "num_base_bdevs_operational": 3, 00:20:00.334 "base_bdevs_list": [ 00:20:00.334 { 00:20:00.334 "name": "BaseBdev1", 00:20:00.334 "uuid": "07cd164b-cdd3-49b2-8f50-b2733da25023", 00:20:00.334 "is_configured": true, 00:20:00.334 "data_offset": 0, 00:20:00.334 "data_size": 65536 00:20:00.334 }, 00:20:00.334 { 00:20:00.334 "name": "BaseBdev2", 00:20:00.334 "uuid": "614fb298-988d-43f2-b2a1-3efd47063cbd", 00:20:00.334 "is_configured": true, 00:20:00.335 "data_offset": 0, 00:20:00.335 "data_size": 65536 00:20:00.335 }, 00:20:00.335 { 00:20:00.335 "name": "BaseBdev3", 00:20:00.335 "uuid": "b9d40719-a978-4221-bce6-cc6fe4fada3e", 00:20:00.335 "is_configured": true, 00:20:00.335 "data_offset": 0, 00:20:00.335 "data_size": 65536 00:20:00.335 } 00:20:00.335 ] 00:20:00.335 } 00:20:00.335 } 00:20:00.335 }' 00:20:00.335 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:00.592 16:36:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:00.592 BaseBdev2 00:20:00.592 BaseBdev3' 00:20:00.592 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:00.592 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:00.592 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:00.592 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:00.592 "name": "BaseBdev1", 00:20:00.592 "aliases": [ 00:20:00.592 "07cd164b-cdd3-49b2-8f50-b2733da25023" 00:20:00.592 ], 00:20:00.592 "product_name": "Malloc disk", 00:20:00.592 "block_size": 512, 00:20:00.592 "num_blocks": 65536, 00:20:00.592 "uuid": "07cd164b-cdd3-49b2-8f50-b2733da25023", 00:20:00.592 "assigned_rate_limits": { 00:20:00.592 "rw_ios_per_sec": 0, 00:20:00.592 "rw_mbytes_per_sec": 0, 00:20:00.592 "r_mbytes_per_sec": 0, 00:20:00.592 "w_mbytes_per_sec": 0 00:20:00.592 }, 00:20:00.592 "claimed": true, 00:20:00.592 "claim_type": "exclusive_write", 00:20:00.592 "zoned": false, 00:20:00.592 "supported_io_types": { 00:20:00.592 "read": true, 00:20:00.592 "write": true, 00:20:00.592 "unmap": true, 00:20:00.592 "flush": true, 00:20:00.592 "reset": true, 00:20:00.592 "nvme_admin": false, 00:20:00.592 "nvme_io": false, 00:20:00.592 "nvme_io_md": false, 00:20:00.592 "write_zeroes": true, 00:20:00.592 "zcopy": true, 00:20:00.592 "get_zone_info": false, 00:20:00.592 "zone_management": false, 00:20:00.592 "zone_append": false, 00:20:00.592 "compare": false, 00:20:00.592 "compare_and_write": false, 00:20:00.592 "abort": true, 00:20:00.592 "seek_hole": false, 00:20:00.592 "seek_data": false, 00:20:00.592 "copy": true, 00:20:00.592 "nvme_iov_md": false 00:20:00.592 }, 00:20:00.592 "memory_domains": [ 00:20:00.592 { 00:20:00.592 "dma_device_id": 
"system", 00:20:00.592 "dma_device_type": 1 00:20:00.592 }, 00:20:00.592 { 00:20:00.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.592 "dma_device_type": 2 00:20:00.592 } 00:20:00.592 ], 00:20:00.592 "driver_specific": {} 00:20:00.592 }' 00:20:00.592 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:00.873 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:00.873 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:00.873 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:00.873 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:00.873 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:00.873 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:00.873 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:00.873 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:00.873 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:01.171 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:01.171 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:01.171 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:01.171 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:01.171 16:36:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:01.171 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:20:01.171 "name": "BaseBdev2", 00:20:01.171 "aliases": [ 00:20:01.171 "614fb298-988d-43f2-b2a1-3efd47063cbd" 00:20:01.171 ], 00:20:01.171 "product_name": "Malloc disk", 00:20:01.171 "block_size": 512, 00:20:01.171 "num_blocks": 65536, 00:20:01.171 "uuid": "614fb298-988d-43f2-b2a1-3efd47063cbd", 00:20:01.171 "assigned_rate_limits": { 00:20:01.171 "rw_ios_per_sec": 0, 00:20:01.171 "rw_mbytes_per_sec": 0, 00:20:01.171 "r_mbytes_per_sec": 0, 00:20:01.171 "w_mbytes_per_sec": 0 00:20:01.171 }, 00:20:01.171 "claimed": true, 00:20:01.171 "claim_type": "exclusive_write", 00:20:01.171 "zoned": false, 00:20:01.171 "supported_io_types": { 00:20:01.171 "read": true, 00:20:01.171 "write": true, 00:20:01.171 "unmap": true, 00:20:01.171 "flush": true, 00:20:01.171 "reset": true, 00:20:01.171 "nvme_admin": false, 00:20:01.171 "nvme_io": false, 00:20:01.171 "nvme_io_md": false, 00:20:01.171 "write_zeroes": true, 00:20:01.171 "zcopy": true, 00:20:01.171 "get_zone_info": false, 00:20:01.171 "zone_management": false, 00:20:01.171 "zone_append": false, 00:20:01.171 "compare": false, 00:20:01.171 "compare_and_write": false, 00:20:01.171 "abort": true, 00:20:01.171 "seek_hole": false, 00:20:01.171 "seek_data": false, 00:20:01.171 "copy": true, 00:20:01.171 "nvme_iov_md": false 00:20:01.171 }, 00:20:01.171 "memory_domains": [ 00:20:01.171 { 00:20:01.171 "dma_device_id": "system", 00:20:01.171 "dma_device_type": 1 00:20:01.171 }, 00:20:01.171 { 00:20:01.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.171 "dma_device_type": 2 00:20:01.171 } 00:20:01.171 ], 00:20:01.171 "driver_specific": {} 00:20:01.171 }' 00:20:01.171 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:01.430 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:01.430 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:01.430 16:36:58 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:01.430 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:01.430 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:01.430 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:01.430 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:01.430 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:01.430 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:01.688 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:01.688 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:01.688 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:01.688 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:01.689 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:01.689 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:01.689 "name": "BaseBdev3", 00:20:01.689 "aliases": [ 00:20:01.689 "b9d40719-a978-4221-bce6-cc6fe4fada3e" 00:20:01.689 ], 00:20:01.689 "product_name": "Malloc disk", 00:20:01.689 "block_size": 512, 00:20:01.689 "num_blocks": 65536, 00:20:01.689 "uuid": "b9d40719-a978-4221-bce6-cc6fe4fada3e", 00:20:01.689 "assigned_rate_limits": { 00:20:01.689 "rw_ios_per_sec": 0, 00:20:01.689 "rw_mbytes_per_sec": 0, 00:20:01.689 "r_mbytes_per_sec": 0, 00:20:01.689 "w_mbytes_per_sec": 0 00:20:01.689 }, 00:20:01.689 "claimed": true, 00:20:01.689 "claim_type": "exclusive_write", 00:20:01.689 "zoned": false, 
00:20:01.689 "supported_io_types": { 00:20:01.689 "read": true, 00:20:01.689 "write": true, 00:20:01.689 "unmap": true, 00:20:01.689 "flush": true, 00:20:01.689 "reset": true, 00:20:01.689 "nvme_admin": false, 00:20:01.689 "nvme_io": false, 00:20:01.689 "nvme_io_md": false, 00:20:01.689 "write_zeroes": true, 00:20:01.689 "zcopy": true, 00:20:01.689 "get_zone_info": false, 00:20:01.689 "zone_management": false, 00:20:01.689 "zone_append": false, 00:20:01.689 "compare": false, 00:20:01.689 "compare_and_write": false, 00:20:01.689 "abort": true, 00:20:01.689 "seek_hole": false, 00:20:01.689 "seek_data": false, 00:20:01.689 "copy": true, 00:20:01.689 "nvme_iov_md": false 00:20:01.689 }, 00:20:01.689 "memory_domains": [ 00:20:01.689 { 00:20:01.689 "dma_device_id": "system", 00:20:01.689 "dma_device_type": 1 00:20:01.689 }, 00:20:01.689 { 00:20:01.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.689 "dma_device_type": 2 00:20:01.689 } 00:20:01.689 ], 00:20:01.689 "driver_specific": {} 00:20:01.689 }' 00:20:01.689 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:01.947 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:01.947 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:01.947 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:01.947 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:01.947 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:01.947 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:01.947 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:01.947 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:02.206 16:36:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.206 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.206 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:02.206 16:36:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:02.465 [2024-07-24 16:36:59.105550] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.465 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.724 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:02.724 "name": "Existed_Raid", 00:20:02.724 "uuid": "5497eb33-bbc1-4f3a-962a-35b54b3147ce", 00:20:02.724 "strip_size_kb": 0, 00:20:02.724 "state": "online", 00:20:02.724 "raid_level": "raid1", 00:20:02.724 "superblock": false, 00:20:02.724 "num_base_bdevs": 3, 00:20:02.724 "num_base_bdevs_discovered": 2, 00:20:02.724 "num_base_bdevs_operational": 2, 00:20:02.724 "base_bdevs_list": [ 00:20:02.724 { 00:20:02.724 "name": null, 00:20:02.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.724 "is_configured": false, 00:20:02.724 "data_offset": 0, 00:20:02.724 "data_size": 65536 00:20:02.724 }, 00:20:02.724 { 00:20:02.724 "name": "BaseBdev2", 00:20:02.724 "uuid": "614fb298-988d-43f2-b2a1-3efd47063cbd", 00:20:02.724 "is_configured": true, 00:20:02.724 "data_offset": 0, 00:20:02.724 "data_size": 65536 00:20:02.724 }, 00:20:02.724 { 00:20:02.724 "name": "BaseBdev3", 00:20:02.724 "uuid": "b9d40719-a978-4221-bce6-cc6fe4fada3e", 00:20:02.724 "is_configured": true, 00:20:02.724 "data_offset": 0, 00:20:02.724 "data_size": 65536 00:20:02.724 } 00:20:02.724 ] 00:20:02.724 }' 00:20:02.724 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:02.724 16:36:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:03.290 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:03.291 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:03.291 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.291 16:36:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:03.549 16:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:03.549 16:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:03.549 16:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:03.549 [2024-07-24 16:37:00.406962] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:03.807 16:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:03.807 16:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:03.807 16:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.807 16:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:04.066 16:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:04.066 16:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:04.066 16:37:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:20:04.324 [2024-07-24 16:37:00.997538] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:04.324 [2024-07-24 16:37:00.997641] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:04.324 [2024-07-24 16:37:01.132988] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:04.324 [2024-07-24 16:37:01.133040] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:04.324 [2024-07-24 16:37:01.133059] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:20:04.324 16:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:04.324 16:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:04.324 16:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.324 16:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:04.583 16:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:04.583 16:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:04.583 16:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:04.583 16:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:04.583 16:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:04.583 16:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:04.842 BaseBdev2 00:20:04.842 
16:37:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:04.842 16:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:04.842 16:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:04.842 16:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:04.842 16:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:04.842 16:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:04.842 16:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:05.100 16:37:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:05.359 [ 00:20:05.359 { 00:20:05.359 "name": "BaseBdev2", 00:20:05.359 "aliases": [ 00:20:05.359 "b9101901-27c4-49a3-9c24-0a078f0a5bcb" 00:20:05.359 ], 00:20:05.359 "product_name": "Malloc disk", 00:20:05.359 "block_size": 512, 00:20:05.359 "num_blocks": 65536, 00:20:05.359 "uuid": "b9101901-27c4-49a3-9c24-0a078f0a5bcb", 00:20:05.359 "assigned_rate_limits": { 00:20:05.359 "rw_ios_per_sec": 0, 00:20:05.359 "rw_mbytes_per_sec": 0, 00:20:05.359 "r_mbytes_per_sec": 0, 00:20:05.359 "w_mbytes_per_sec": 0 00:20:05.359 }, 00:20:05.359 "claimed": false, 00:20:05.359 "zoned": false, 00:20:05.359 "supported_io_types": { 00:20:05.359 "read": true, 00:20:05.359 "write": true, 00:20:05.359 "unmap": true, 00:20:05.359 "flush": true, 00:20:05.359 "reset": true, 00:20:05.359 "nvme_admin": false, 00:20:05.359 "nvme_io": false, 00:20:05.359 "nvme_io_md": false, 00:20:05.359 "write_zeroes": true, 00:20:05.359 
"zcopy": true, 00:20:05.359 "get_zone_info": false, 00:20:05.359 "zone_management": false, 00:20:05.359 "zone_append": false, 00:20:05.359 "compare": false, 00:20:05.359 "compare_and_write": false, 00:20:05.359 "abort": true, 00:20:05.359 "seek_hole": false, 00:20:05.359 "seek_data": false, 00:20:05.359 "copy": true, 00:20:05.359 "nvme_iov_md": false 00:20:05.359 }, 00:20:05.359 "memory_domains": [ 00:20:05.359 { 00:20:05.359 "dma_device_id": "system", 00:20:05.359 "dma_device_type": 1 00:20:05.359 }, 00:20:05.359 { 00:20:05.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.359 "dma_device_type": 2 00:20:05.359 } 00:20:05.359 ], 00:20:05.359 "driver_specific": {} 00:20:05.359 } 00:20:05.359 ] 00:20:05.359 16:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:05.359 16:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:05.359 16:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:05.359 16:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:05.618 BaseBdev3 00:20:05.618 16:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:05.618 16:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:05.618 16:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:05.618 16:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:05.618 16:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:05.618 16:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:05.618 16:37:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:05.876 16:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:06.134 [ 00:20:06.134 { 00:20:06.134 "name": "BaseBdev3", 00:20:06.134 "aliases": [ 00:20:06.134 "c9d74aed-e9f7-469a-845b-6ff1bfa81814" 00:20:06.134 ], 00:20:06.134 "product_name": "Malloc disk", 00:20:06.134 "block_size": 512, 00:20:06.134 "num_blocks": 65536, 00:20:06.134 "uuid": "c9d74aed-e9f7-469a-845b-6ff1bfa81814", 00:20:06.134 "assigned_rate_limits": { 00:20:06.134 "rw_ios_per_sec": 0, 00:20:06.134 "rw_mbytes_per_sec": 0, 00:20:06.134 "r_mbytes_per_sec": 0, 00:20:06.134 "w_mbytes_per_sec": 0 00:20:06.134 }, 00:20:06.134 "claimed": false, 00:20:06.134 "zoned": false, 00:20:06.134 "supported_io_types": { 00:20:06.134 "read": true, 00:20:06.134 "write": true, 00:20:06.134 "unmap": true, 00:20:06.134 "flush": true, 00:20:06.134 "reset": true, 00:20:06.134 "nvme_admin": false, 00:20:06.134 "nvme_io": false, 00:20:06.134 "nvme_io_md": false, 00:20:06.134 "write_zeroes": true, 00:20:06.134 "zcopy": true, 00:20:06.134 "get_zone_info": false, 00:20:06.134 "zone_management": false, 00:20:06.134 "zone_append": false, 00:20:06.134 "compare": false, 00:20:06.134 "compare_and_write": false, 00:20:06.134 "abort": true, 00:20:06.134 "seek_hole": false, 00:20:06.134 "seek_data": false, 00:20:06.134 "copy": true, 00:20:06.134 "nvme_iov_md": false 00:20:06.134 }, 00:20:06.134 "memory_domains": [ 00:20:06.134 { 00:20:06.134 "dma_device_id": "system", 00:20:06.134 "dma_device_type": 1 00:20:06.134 }, 00:20:06.134 { 00:20:06.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.134 "dma_device_type": 2 00:20:06.134 } 00:20:06.134 ], 00:20:06.134 "driver_specific": {} 00:20:06.134 } 00:20:06.134 ] 00:20:06.134 
16:37:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:06.134 16:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:06.134 16:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:06.134 16:37:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:06.134 [2024-07-24 16:37:02.986370] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:06.134 [2024-07-24 16:37:02.986419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:06.134 [2024-07-24 16:37:02.986447] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:06.135 [2024-07-24 16:37:02.988755] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:06.393 "name": "Existed_Raid", 00:20:06.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.393 "strip_size_kb": 0, 00:20:06.393 "state": "configuring", 00:20:06.393 "raid_level": "raid1", 00:20:06.393 "superblock": false, 00:20:06.393 "num_base_bdevs": 3, 00:20:06.393 "num_base_bdevs_discovered": 2, 00:20:06.393 "num_base_bdevs_operational": 3, 00:20:06.393 "base_bdevs_list": [ 00:20:06.393 { 00:20:06.393 "name": "BaseBdev1", 00:20:06.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.393 "is_configured": false, 00:20:06.393 "data_offset": 0, 00:20:06.393 "data_size": 0 00:20:06.393 }, 00:20:06.393 { 00:20:06.393 "name": "BaseBdev2", 00:20:06.393 "uuid": "b9101901-27c4-49a3-9c24-0a078f0a5bcb", 00:20:06.393 "is_configured": true, 00:20:06.393 "data_offset": 0, 00:20:06.393 "data_size": 65536 00:20:06.393 }, 00:20:06.393 { 00:20:06.393 "name": "BaseBdev3", 00:20:06.393 "uuid": "c9d74aed-e9f7-469a-845b-6ff1bfa81814", 00:20:06.393 "is_configured": true, 00:20:06.393 "data_offset": 0, 00:20:06.393 "data_size": 65536 00:20:06.393 } 00:20:06.393 ] 00:20:06.393 }' 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:06.393 16:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.960 16:37:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:07.219 [2024-07-24 16:37:04.017155] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:07.219 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:07.219 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:07.219 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:07.219 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:07.219 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:07.219 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:07.219 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:07.219 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:07.219 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:07.219 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:07.219 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.219 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:07.477 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:07.477 "name": "Existed_Raid", 00:20:07.477 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:07.477 "strip_size_kb": 0, 00:20:07.477 "state": "configuring", 00:20:07.477 "raid_level": "raid1", 00:20:07.477 "superblock": false, 00:20:07.477 "num_base_bdevs": 3, 00:20:07.477 "num_base_bdevs_discovered": 1, 00:20:07.477 "num_base_bdevs_operational": 3, 00:20:07.477 "base_bdevs_list": [ 00:20:07.477 { 00:20:07.477 "name": "BaseBdev1", 00:20:07.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.477 "is_configured": false, 00:20:07.477 "data_offset": 0, 00:20:07.477 "data_size": 0 00:20:07.477 }, 00:20:07.477 { 00:20:07.477 "name": null, 00:20:07.477 "uuid": "b9101901-27c4-49a3-9c24-0a078f0a5bcb", 00:20:07.477 "is_configured": false, 00:20:07.477 "data_offset": 0, 00:20:07.477 "data_size": 65536 00:20:07.477 }, 00:20:07.477 { 00:20:07.477 "name": "BaseBdev3", 00:20:07.477 "uuid": "c9d74aed-e9f7-469a-845b-6ff1bfa81814", 00:20:07.477 "is_configured": true, 00:20:07.477 "data_offset": 0, 00:20:07.477 "data_size": 65536 00:20:07.477 } 00:20:07.477 ] 00:20:07.477 }' 00:20:07.477 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:07.477 16:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.045 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.045 16:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:08.304 16:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:08.304 16:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:08.562 [2024-07-24 16:37:05.337149] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:08.562 
BaseBdev1 00:20:08.562 16:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:08.562 16:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:08.562 16:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:08.562 16:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:08.562 16:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:08.562 16:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:08.562 16:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:08.819 16:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:09.077 [ 00:20:09.077 { 00:20:09.077 "name": "BaseBdev1", 00:20:09.077 "aliases": [ 00:20:09.077 "94cfa6e6-97b2-488d-b2ef-ed27ae81012f" 00:20:09.077 ], 00:20:09.077 "product_name": "Malloc disk", 00:20:09.077 "block_size": 512, 00:20:09.077 "num_blocks": 65536, 00:20:09.077 "uuid": "94cfa6e6-97b2-488d-b2ef-ed27ae81012f", 00:20:09.077 "assigned_rate_limits": { 00:20:09.077 "rw_ios_per_sec": 0, 00:20:09.077 "rw_mbytes_per_sec": 0, 00:20:09.077 "r_mbytes_per_sec": 0, 00:20:09.077 "w_mbytes_per_sec": 0 00:20:09.077 }, 00:20:09.077 "claimed": true, 00:20:09.077 "claim_type": "exclusive_write", 00:20:09.077 "zoned": false, 00:20:09.077 "supported_io_types": { 00:20:09.077 "read": true, 00:20:09.077 "write": true, 00:20:09.077 "unmap": true, 00:20:09.077 "flush": true, 00:20:09.077 "reset": true, 00:20:09.077 "nvme_admin": false, 00:20:09.077 "nvme_io": false, 00:20:09.077 
"nvme_io_md": false, 00:20:09.077 "write_zeroes": true, 00:20:09.077 "zcopy": true, 00:20:09.077 "get_zone_info": false, 00:20:09.077 "zone_management": false, 00:20:09.077 "zone_append": false, 00:20:09.077 "compare": false, 00:20:09.077 "compare_and_write": false, 00:20:09.077 "abort": true, 00:20:09.077 "seek_hole": false, 00:20:09.077 "seek_data": false, 00:20:09.077 "copy": true, 00:20:09.077 "nvme_iov_md": false 00:20:09.077 }, 00:20:09.077 "memory_domains": [ 00:20:09.077 { 00:20:09.077 "dma_device_id": "system", 00:20:09.077 "dma_device_type": 1 00:20:09.077 }, 00:20:09.077 { 00:20:09.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.077 "dma_device_type": 2 00:20:09.077 } 00:20:09.077 ], 00:20:09.077 "driver_specific": {} 00:20:09.077 } 00:20:09.077 ] 00:20:09.077 16:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:09.077 16:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:09.077 16:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:09.077 16:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:09.077 16:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:09.077 16:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:09.077 16:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:09.077 16:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:09.077 16:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:09.077 16:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:09.077 16:37:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:20:09.077 16:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.077 16:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.335 16:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:09.335 "name": "Existed_Raid", 00:20:09.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.335 "strip_size_kb": 0, 00:20:09.335 "state": "configuring", 00:20:09.335 "raid_level": "raid1", 00:20:09.335 "superblock": false, 00:20:09.335 "num_base_bdevs": 3, 00:20:09.335 "num_base_bdevs_discovered": 2, 00:20:09.335 "num_base_bdevs_operational": 3, 00:20:09.335 "base_bdevs_list": [ 00:20:09.335 { 00:20:09.335 "name": "BaseBdev1", 00:20:09.335 "uuid": "94cfa6e6-97b2-488d-b2ef-ed27ae81012f", 00:20:09.335 "is_configured": true, 00:20:09.335 "data_offset": 0, 00:20:09.335 "data_size": 65536 00:20:09.335 }, 00:20:09.335 { 00:20:09.335 "name": null, 00:20:09.335 "uuid": "b9101901-27c4-49a3-9c24-0a078f0a5bcb", 00:20:09.335 "is_configured": false, 00:20:09.335 "data_offset": 0, 00:20:09.335 "data_size": 65536 00:20:09.335 }, 00:20:09.335 { 00:20:09.335 "name": "BaseBdev3", 00:20:09.335 "uuid": "c9d74aed-e9f7-469a-845b-6ff1bfa81814", 00:20:09.335 "is_configured": true, 00:20:09.335 "data_offset": 0, 00:20:09.335 "data_size": 65536 00:20:09.335 } 00:20:09.335 ] 00:20:09.335 }' 00:20:09.335 16:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:09.335 16:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.900 16:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:09.900 16:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.158 16:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:10.158 16:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:10.158 [2024-07-24 16:37:07.013720] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.416 "name": "Existed_Raid", 00:20:10.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.416 "strip_size_kb": 0, 00:20:10.416 "state": "configuring", 00:20:10.416 "raid_level": "raid1", 00:20:10.416 "superblock": false, 00:20:10.416 "num_base_bdevs": 3, 00:20:10.416 "num_base_bdevs_discovered": 1, 00:20:10.416 "num_base_bdevs_operational": 3, 00:20:10.416 "base_bdevs_list": [ 00:20:10.416 { 00:20:10.416 "name": "BaseBdev1", 00:20:10.416 "uuid": "94cfa6e6-97b2-488d-b2ef-ed27ae81012f", 00:20:10.416 "is_configured": true, 00:20:10.416 "data_offset": 0, 00:20:10.416 "data_size": 65536 00:20:10.416 }, 00:20:10.416 { 00:20:10.416 "name": null, 00:20:10.416 "uuid": "b9101901-27c4-49a3-9c24-0a078f0a5bcb", 00:20:10.416 "is_configured": false, 00:20:10.416 "data_offset": 0, 00:20:10.416 "data_size": 65536 00:20:10.416 }, 00:20:10.416 { 00:20:10.416 "name": null, 00:20:10.416 "uuid": "c9d74aed-e9f7-469a-845b-6ff1bfa81814", 00:20:10.416 "is_configured": false, 00:20:10.416 "data_offset": 0, 00:20:10.416 "data_size": 65536 00:20:10.416 } 00:20:10.416 ] 00:20:10.416 }' 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.416 16:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.982 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.982 16:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:11.240 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:11.240 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:11.498 [2024-07-24 16:37:08.261137] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:11.498 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:11.498 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:11.498 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:11.498 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:11.498 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:11.498 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:11.498 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.498 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.498 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.498 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.498 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.498 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.755 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:11.755 "name": "Existed_Raid", 00:20:11.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.755 "strip_size_kb": 0, 
00:20:11.755 "state": "configuring", 00:20:11.755 "raid_level": "raid1", 00:20:11.755 "superblock": false, 00:20:11.755 "num_base_bdevs": 3, 00:20:11.755 "num_base_bdevs_discovered": 2, 00:20:11.755 "num_base_bdevs_operational": 3, 00:20:11.755 "base_bdevs_list": [ 00:20:11.755 { 00:20:11.755 "name": "BaseBdev1", 00:20:11.755 "uuid": "94cfa6e6-97b2-488d-b2ef-ed27ae81012f", 00:20:11.755 "is_configured": true, 00:20:11.755 "data_offset": 0, 00:20:11.755 "data_size": 65536 00:20:11.755 }, 00:20:11.755 { 00:20:11.755 "name": null, 00:20:11.755 "uuid": "b9101901-27c4-49a3-9c24-0a078f0a5bcb", 00:20:11.755 "is_configured": false, 00:20:11.755 "data_offset": 0, 00:20:11.755 "data_size": 65536 00:20:11.755 }, 00:20:11.755 { 00:20:11.755 "name": "BaseBdev3", 00:20:11.755 "uuid": "c9d74aed-e9f7-469a-845b-6ff1bfa81814", 00:20:11.755 "is_configured": true, 00:20:11.755 "data_offset": 0, 00:20:11.755 "data_size": 65536 00:20:11.755 } 00:20:11.755 ] 00:20:11.755 }' 00:20:11.755 16:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:11.756 16:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.320 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.320 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:12.578 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:12.578 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:12.842 [2024-07-24 16:37:09.500491] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:12.842 16:37:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:12.842 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:12.842 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:12.842 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:12.842 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:12.842 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:12.842 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:12.842 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:12.842 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:12.842 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:12.842 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.842 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.126 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.126 "name": "Existed_Raid", 00:20:13.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.126 "strip_size_kb": 0, 00:20:13.126 "state": "configuring", 00:20:13.126 "raid_level": "raid1", 00:20:13.126 "superblock": false, 00:20:13.126 "num_base_bdevs": 3, 00:20:13.126 "num_base_bdevs_discovered": 1, 00:20:13.127 "num_base_bdevs_operational": 3, 00:20:13.127 "base_bdevs_list": [ 00:20:13.127 { 00:20:13.127 "name": null, 00:20:13.127 "uuid": 
"94cfa6e6-97b2-488d-b2ef-ed27ae81012f", 00:20:13.127 "is_configured": false, 00:20:13.127 "data_offset": 0, 00:20:13.127 "data_size": 65536 00:20:13.127 }, 00:20:13.127 { 00:20:13.127 "name": null, 00:20:13.127 "uuid": "b9101901-27c4-49a3-9c24-0a078f0a5bcb", 00:20:13.127 "is_configured": false, 00:20:13.127 "data_offset": 0, 00:20:13.127 "data_size": 65536 00:20:13.127 }, 00:20:13.127 { 00:20:13.127 "name": "BaseBdev3", 00:20:13.127 "uuid": "c9d74aed-e9f7-469a-845b-6ff1bfa81814", 00:20:13.127 "is_configured": true, 00:20:13.127 "data_offset": 0, 00:20:13.127 "data_size": 65536 00:20:13.127 } 00:20:13.127 ] 00:20:13.127 }' 00:20:13.127 16:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.127 16:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.694 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.694 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:13.953 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:13.953 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:14.212 [2024-07-24 16:37:10.879610] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:14.212 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:14.212 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:14.212 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:20:14.212 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:14.212 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:14.212 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:14.212 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:14.212 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:14.212 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:14.212 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:14.212 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.212 16:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.470 16:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.470 "name": "Existed_Raid", 00:20:14.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.470 "strip_size_kb": 0, 00:20:14.470 "state": "configuring", 00:20:14.470 "raid_level": "raid1", 00:20:14.470 "superblock": false, 00:20:14.470 "num_base_bdevs": 3, 00:20:14.470 "num_base_bdevs_discovered": 2, 00:20:14.470 "num_base_bdevs_operational": 3, 00:20:14.470 "base_bdevs_list": [ 00:20:14.470 { 00:20:14.470 "name": null, 00:20:14.470 "uuid": "94cfa6e6-97b2-488d-b2ef-ed27ae81012f", 00:20:14.470 "is_configured": false, 00:20:14.470 "data_offset": 0, 00:20:14.470 "data_size": 65536 00:20:14.470 }, 00:20:14.470 { 00:20:14.470 "name": "BaseBdev2", 00:20:14.470 "uuid": "b9101901-27c4-49a3-9c24-0a078f0a5bcb", 00:20:14.470 "is_configured": true, 
00:20:14.470 "data_offset": 0, 00:20:14.470 "data_size": 65536 00:20:14.470 }, 00:20:14.470 { 00:20:14.470 "name": "BaseBdev3", 00:20:14.470 "uuid": "c9d74aed-e9f7-469a-845b-6ff1bfa81814", 00:20:14.470 "is_configured": true, 00:20:14.470 "data_offset": 0, 00:20:14.470 "data_size": 65536 00:20:14.470 } 00:20:14.470 ] 00:20:14.470 }' 00:20:14.470 16:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.470 16:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.037 16:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.037 16:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:15.296 16:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:15.296 16:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.296 16:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:15.554 16:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 94cfa6e6-97b2-488d-b2ef-ed27ae81012f 00:20:15.813 [2024-07-24 16:37:12.425386] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:15.813 [2024-07-24 16:37:12.425436] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:20:15.813 [2024-07-24 16:37:12.425449] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:15.813 [2024-07-24 16:37:12.425764] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:20:15.813 [2024-07-24 16:37:12.425969] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:20:15.813 [2024-07-24 16:37:12.425988] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:20:15.813 [2024-07-24 16:37:12.426298] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.813 NewBaseBdev 00:20:15.813 16:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:15.813 16:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:20:15.813 16:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:15.813 16:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:15.813 16:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:15.813 16:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:15.813 16:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:15.813 16:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:16.072 [ 00:20:16.072 { 00:20:16.072 "name": "NewBaseBdev", 00:20:16.072 "aliases": [ 00:20:16.072 "94cfa6e6-97b2-488d-b2ef-ed27ae81012f" 00:20:16.072 ], 00:20:16.072 "product_name": "Malloc disk", 00:20:16.072 "block_size": 512, 00:20:16.072 "num_blocks": 65536, 00:20:16.072 "uuid": "94cfa6e6-97b2-488d-b2ef-ed27ae81012f", 00:20:16.072 "assigned_rate_limits": { 00:20:16.072 "rw_ios_per_sec": 
0, 00:20:16.072 "rw_mbytes_per_sec": 0, 00:20:16.072 "r_mbytes_per_sec": 0, 00:20:16.072 "w_mbytes_per_sec": 0 00:20:16.072 }, 00:20:16.072 "claimed": true, 00:20:16.072 "claim_type": "exclusive_write", 00:20:16.072 "zoned": false, 00:20:16.072 "supported_io_types": { 00:20:16.072 "read": true, 00:20:16.072 "write": true, 00:20:16.072 "unmap": true, 00:20:16.072 "flush": true, 00:20:16.072 "reset": true, 00:20:16.072 "nvme_admin": false, 00:20:16.072 "nvme_io": false, 00:20:16.072 "nvme_io_md": false, 00:20:16.072 "write_zeroes": true, 00:20:16.072 "zcopy": true, 00:20:16.072 "get_zone_info": false, 00:20:16.072 "zone_management": false, 00:20:16.072 "zone_append": false, 00:20:16.072 "compare": false, 00:20:16.072 "compare_and_write": false, 00:20:16.072 "abort": true, 00:20:16.072 "seek_hole": false, 00:20:16.072 "seek_data": false, 00:20:16.072 "copy": true, 00:20:16.072 "nvme_iov_md": false 00:20:16.072 }, 00:20:16.072 "memory_domains": [ 00:20:16.072 { 00:20:16.072 "dma_device_id": "system", 00:20:16.072 "dma_device_type": 1 00:20:16.072 }, 00:20:16.072 { 00:20:16.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.072 "dma_device_type": 2 00:20:16.072 } 00:20:16.072 ], 00:20:16.072 "driver_specific": {} 00:20:16.072 } 00:20:16.072 ] 00:20:16.072 16:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:16.072 16:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:16.072 16:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.072 16:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:16.072 16:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:16.072 16:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:16.072 16:37:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:16.072 16:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.072 16:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.072 16:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.072 16:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.072 16:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.072 16:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.330 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.330 "name": "Existed_Raid", 00:20:16.330 "uuid": "1a0de15b-c281-42e5-9d01-5de4b67f5667", 00:20:16.330 "strip_size_kb": 0, 00:20:16.330 "state": "online", 00:20:16.330 "raid_level": "raid1", 00:20:16.330 "superblock": false, 00:20:16.330 "num_base_bdevs": 3, 00:20:16.330 "num_base_bdevs_discovered": 3, 00:20:16.330 "num_base_bdevs_operational": 3, 00:20:16.330 "base_bdevs_list": [ 00:20:16.330 { 00:20:16.330 "name": "NewBaseBdev", 00:20:16.330 "uuid": "94cfa6e6-97b2-488d-b2ef-ed27ae81012f", 00:20:16.330 "is_configured": true, 00:20:16.330 "data_offset": 0, 00:20:16.330 "data_size": 65536 00:20:16.330 }, 00:20:16.330 { 00:20:16.330 "name": "BaseBdev2", 00:20:16.330 "uuid": "b9101901-27c4-49a3-9c24-0a078f0a5bcb", 00:20:16.330 "is_configured": true, 00:20:16.330 "data_offset": 0, 00:20:16.330 "data_size": 65536 00:20:16.330 }, 00:20:16.330 { 00:20:16.330 "name": "BaseBdev3", 00:20:16.330 "uuid": "c9d74aed-e9f7-469a-845b-6ff1bfa81814", 00:20:16.330 "is_configured": true, 00:20:16.330 "data_offset": 0, 
00:20:16.330 "data_size": 65536 00:20:16.330 } 00:20:16.330 ] 00:20:16.330 }' 00:20:16.330 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.330 16:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.907 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:16.907 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:16.907 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:16.907 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:16.907 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:16.907 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:16.907 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:16.907 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:17.165 [2024-07-24 16:37:13.917879] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.165 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:17.165 "name": "Existed_Raid", 00:20:17.165 "aliases": [ 00:20:17.165 "1a0de15b-c281-42e5-9d01-5de4b67f5667" 00:20:17.165 ], 00:20:17.165 "product_name": "Raid Volume", 00:20:17.165 "block_size": 512, 00:20:17.165 "num_blocks": 65536, 00:20:17.165 "uuid": "1a0de15b-c281-42e5-9d01-5de4b67f5667", 00:20:17.165 "assigned_rate_limits": { 00:20:17.165 "rw_ios_per_sec": 0, 00:20:17.165 "rw_mbytes_per_sec": 0, 00:20:17.165 "r_mbytes_per_sec": 0, 00:20:17.165 "w_mbytes_per_sec": 0 00:20:17.165 }, 00:20:17.165 
"claimed": false, 00:20:17.165 "zoned": false, 00:20:17.165 "supported_io_types": { 00:20:17.165 "read": true, 00:20:17.165 "write": true, 00:20:17.165 "unmap": false, 00:20:17.165 "flush": false, 00:20:17.165 "reset": true, 00:20:17.165 "nvme_admin": false, 00:20:17.165 "nvme_io": false, 00:20:17.165 "nvme_io_md": false, 00:20:17.165 "write_zeroes": true, 00:20:17.165 "zcopy": false, 00:20:17.165 "get_zone_info": false, 00:20:17.165 "zone_management": false, 00:20:17.165 "zone_append": false, 00:20:17.165 "compare": false, 00:20:17.165 "compare_and_write": false, 00:20:17.165 "abort": false, 00:20:17.165 "seek_hole": false, 00:20:17.165 "seek_data": false, 00:20:17.165 "copy": false, 00:20:17.165 "nvme_iov_md": false 00:20:17.165 }, 00:20:17.165 "memory_domains": [ 00:20:17.165 { 00:20:17.165 "dma_device_id": "system", 00:20:17.165 "dma_device_type": 1 00:20:17.165 }, 00:20:17.165 { 00:20:17.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.165 "dma_device_type": 2 00:20:17.165 }, 00:20:17.165 { 00:20:17.165 "dma_device_id": "system", 00:20:17.165 "dma_device_type": 1 00:20:17.165 }, 00:20:17.165 { 00:20:17.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.165 "dma_device_type": 2 00:20:17.165 }, 00:20:17.165 { 00:20:17.165 "dma_device_id": "system", 00:20:17.165 "dma_device_type": 1 00:20:17.165 }, 00:20:17.165 { 00:20:17.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.165 "dma_device_type": 2 00:20:17.165 } 00:20:17.165 ], 00:20:17.165 "driver_specific": { 00:20:17.165 "raid": { 00:20:17.165 "uuid": "1a0de15b-c281-42e5-9d01-5de4b67f5667", 00:20:17.165 "strip_size_kb": 0, 00:20:17.165 "state": "online", 00:20:17.165 "raid_level": "raid1", 00:20:17.165 "superblock": false, 00:20:17.165 "num_base_bdevs": 3, 00:20:17.165 "num_base_bdevs_discovered": 3, 00:20:17.165 "num_base_bdevs_operational": 3, 00:20:17.165 "base_bdevs_list": [ 00:20:17.165 { 00:20:17.165 "name": "NewBaseBdev", 00:20:17.165 "uuid": "94cfa6e6-97b2-488d-b2ef-ed27ae81012f", 
00:20:17.165 "is_configured": true, 00:20:17.165 "data_offset": 0, 00:20:17.165 "data_size": 65536 00:20:17.165 }, 00:20:17.165 { 00:20:17.165 "name": "BaseBdev2", 00:20:17.165 "uuid": "b9101901-27c4-49a3-9c24-0a078f0a5bcb", 00:20:17.165 "is_configured": true, 00:20:17.165 "data_offset": 0, 00:20:17.165 "data_size": 65536 00:20:17.165 }, 00:20:17.165 { 00:20:17.165 "name": "BaseBdev3", 00:20:17.165 "uuid": "c9d74aed-e9f7-469a-845b-6ff1bfa81814", 00:20:17.165 "is_configured": true, 00:20:17.165 "data_offset": 0, 00:20:17.165 "data_size": 65536 00:20:17.165 } 00:20:17.165 ] 00:20:17.165 } 00:20:17.165 } 00:20:17.165 }' 00:20:17.165 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:17.165 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:17.165 BaseBdev2 00:20:17.165 BaseBdev3' 00:20:17.165 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:17.165 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:17.165 16:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:17.424 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:17.424 "name": "NewBaseBdev", 00:20:17.424 "aliases": [ 00:20:17.424 "94cfa6e6-97b2-488d-b2ef-ed27ae81012f" 00:20:17.424 ], 00:20:17.424 "product_name": "Malloc disk", 00:20:17.424 "block_size": 512, 00:20:17.424 "num_blocks": 65536, 00:20:17.424 "uuid": "94cfa6e6-97b2-488d-b2ef-ed27ae81012f", 00:20:17.424 "assigned_rate_limits": { 00:20:17.424 "rw_ios_per_sec": 0, 00:20:17.424 "rw_mbytes_per_sec": 0, 00:20:17.424 "r_mbytes_per_sec": 0, 00:20:17.424 "w_mbytes_per_sec": 0 00:20:17.424 }, 00:20:17.424 
"claimed": true, 00:20:17.424 "claim_type": "exclusive_write", 00:20:17.424 "zoned": false, 00:20:17.424 "supported_io_types": { 00:20:17.424 "read": true, 00:20:17.424 "write": true, 00:20:17.424 "unmap": true, 00:20:17.424 "flush": true, 00:20:17.424 "reset": true, 00:20:17.424 "nvme_admin": false, 00:20:17.424 "nvme_io": false, 00:20:17.424 "nvme_io_md": false, 00:20:17.424 "write_zeroes": true, 00:20:17.425 "zcopy": true, 00:20:17.425 "get_zone_info": false, 00:20:17.425 "zone_management": false, 00:20:17.425 "zone_append": false, 00:20:17.425 "compare": false, 00:20:17.425 "compare_and_write": false, 00:20:17.425 "abort": true, 00:20:17.425 "seek_hole": false, 00:20:17.425 "seek_data": false, 00:20:17.425 "copy": true, 00:20:17.425 "nvme_iov_md": false 00:20:17.425 }, 00:20:17.425 "memory_domains": [ 00:20:17.425 { 00:20:17.425 "dma_device_id": "system", 00:20:17.425 "dma_device_type": 1 00:20:17.425 }, 00:20:17.425 { 00:20:17.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.425 "dma_device_type": 2 00:20:17.425 } 00:20:17.425 ], 00:20:17.425 "driver_specific": {} 00:20:17.425 }' 00:20:17.425 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:17.425 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:17.684 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:17.684 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:17.684 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:17.684 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:17.684 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:17.684 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:17.684 16:37:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:17.684 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:17.684 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:17.943 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:17.943 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:17.943 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:17.943 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:17.943 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:17.943 "name": "BaseBdev2", 00:20:17.943 "aliases": [ 00:20:17.943 "b9101901-27c4-49a3-9c24-0a078f0a5bcb" 00:20:17.943 ], 00:20:17.943 "product_name": "Malloc disk", 00:20:17.943 "block_size": 512, 00:20:17.943 "num_blocks": 65536, 00:20:17.943 "uuid": "b9101901-27c4-49a3-9c24-0a078f0a5bcb", 00:20:17.943 "assigned_rate_limits": { 00:20:17.943 "rw_ios_per_sec": 0, 00:20:17.943 "rw_mbytes_per_sec": 0, 00:20:17.943 "r_mbytes_per_sec": 0, 00:20:17.943 "w_mbytes_per_sec": 0 00:20:17.943 }, 00:20:17.943 "claimed": true, 00:20:17.943 "claim_type": "exclusive_write", 00:20:17.943 "zoned": false, 00:20:17.943 "supported_io_types": { 00:20:17.943 "read": true, 00:20:17.943 "write": true, 00:20:17.943 "unmap": true, 00:20:17.943 "flush": true, 00:20:17.943 "reset": true, 00:20:17.943 "nvme_admin": false, 00:20:17.943 "nvme_io": false, 00:20:17.943 "nvme_io_md": false, 00:20:17.943 "write_zeroes": true, 00:20:17.943 "zcopy": true, 00:20:17.943 "get_zone_info": false, 00:20:17.943 "zone_management": false, 00:20:17.943 "zone_append": false, 00:20:17.943 "compare": false, 00:20:17.943 "compare_and_write": false, 
00:20:17.943 "abort": true, 00:20:17.943 "seek_hole": false, 00:20:17.943 "seek_data": false, 00:20:17.943 "copy": true, 00:20:17.943 "nvme_iov_md": false 00:20:17.943 }, 00:20:17.943 "memory_domains": [ 00:20:17.943 { 00:20:17.943 "dma_device_id": "system", 00:20:17.943 "dma_device_type": 1 00:20:17.943 }, 00:20:17.943 { 00:20:17.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.943 "dma_device_type": 2 00:20:17.943 } 00:20:17.943 ], 00:20:17.943 "driver_specific": {} 00:20:17.943 }' 00:20:17.943 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:17.943 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.202 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:18.202 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.202 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.202 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:18.202 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.202 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.202 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:18.202 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.202 16:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.202 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:18.202 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:18.202 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:18.202 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:18.460 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:18.460 "name": "BaseBdev3", 00:20:18.460 "aliases": [ 00:20:18.460 "c9d74aed-e9f7-469a-845b-6ff1bfa81814" 00:20:18.460 ], 00:20:18.460 "product_name": "Malloc disk", 00:20:18.460 "block_size": 512, 00:20:18.460 "num_blocks": 65536, 00:20:18.460 "uuid": "c9d74aed-e9f7-469a-845b-6ff1bfa81814", 00:20:18.460 "assigned_rate_limits": { 00:20:18.460 "rw_ios_per_sec": 0, 00:20:18.460 "rw_mbytes_per_sec": 0, 00:20:18.460 "r_mbytes_per_sec": 0, 00:20:18.460 "w_mbytes_per_sec": 0 00:20:18.460 }, 00:20:18.460 "claimed": true, 00:20:18.460 "claim_type": "exclusive_write", 00:20:18.460 "zoned": false, 00:20:18.460 "supported_io_types": { 00:20:18.460 "read": true, 00:20:18.460 "write": true, 00:20:18.460 "unmap": true, 00:20:18.460 "flush": true, 00:20:18.460 "reset": true, 00:20:18.460 "nvme_admin": false, 00:20:18.460 "nvme_io": false, 00:20:18.460 "nvme_io_md": false, 00:20:18.460 "write_zeroes": true, 00:20:18.460 "zcopy": true, 00:20:18.460 "get_zone_info": false, 00:20:18.460 "zone_management": false, 00:20:18.460 "zone_append": false, 00:20:18.460 "compare": false, 00:20:18.460 "compare_and_write": false, 00:20:18.460 "abort": true, 00:20:18.460 "seek_hole": false, 00:20:18.460 "seek_data": false, 00:20:18.460 "copy": true, 00:20:18.460 "nvme_iov_md": false 00:20:18.460 }, 00:20:18.460 "memory_domains": [ 00:20:18.460 { 00:20:18.460 "dma_device_id": "system", 00:20:18.460 "dma_device_type": 1 00:20:18.460 }, 00:20:18.460 { 00:20:18.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.460 "dma_device_type": 2 00:20:18.460 } 00:20:18.460 ], 00:20:18.460 "driver_specific": {} 00:20:18.460 }' 00:20:18.460 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.460 16:37:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.719 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:18.719 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.719 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.719 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:18.719 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.719 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.719 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:18.719 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.719 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.719 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:18.719 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:18.977 [2024-07-24 16:37:15.786539] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:18.977 [2024-07-24 16:37:15.786574] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:18.977 [2024-07-24 16:37:15.786655] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:18.977 [2024-07-24 16:37:15.786995] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:18.977 [2024-07-24 16:37:15.787013] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 
00:20:18.977 16:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1660290 00:20:18.977 16:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 1660290 ']' 00:20:18.977 16:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 1660290 00:20:18.977 16:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:20:18.977 16:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:18.977 16:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1660290 00:20:19.235 16:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:19.235 16:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:19.235 16:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1660290' 00:20:19.235 killing process with pid 1660290 00:20:19.235 16:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 1660290 00:20:19.235 [2024-07-24 16:37:15.864522] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:19.235 16:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 1660290 00:20:19.493 [2024-07-24 16:37:16.196673] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:21.396 16:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:21.396 00:20:21.396 real 0m29.428s 00:20:21.396 user 0m51.377s 00:20:21.396 sys 0m5.133s 00:20:21.396 16:37:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:21.396 16:37:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.396 ************************************ 00:20:21.396 END TEST 
raid_state_function_test 00:20:21.396 ************************************ 00:20:21.396 16:37:17 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:20:21.396 16:37:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:21.396 16:37:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:21.396 16:37:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:21.396 ************************************ 00:20:21.396 START TEST raid_state_function_test_sb 00:20:21.396 ************************************ 00:20:21.396 16:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:20:21.396 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:21.396 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:21.396 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:21.396 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:21.396 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:21.396 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:21.396 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:21.396 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1665784 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1665784' 00:20:21.397 Process raid pid: 1665784 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 
-L bdev_raid 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1665784 /var/tmp/spdk-raid.sock 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 1665784 ']' 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:21.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:21.397 16:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.397 [2024-07-24 16:37:18.087329] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:20:21.397 [2024-07-24 16:37:18.087444] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:21.397 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:21.397 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:21.397 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:21.397 EAL: Requested device 0000:3f:02.2 cannot be used
00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:21.397 EAL: Requested device 0000:3f:02.3 cannot be used
00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:21.397 EAL: Requested device 0000:3f:02.4 cannot be used
00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:21.397 EAL: Requested device 0000:3f:02.5 cannot be used
00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:21.397 EAL: Requested device 0000:3f:02.6 cannot be used
00:20:21.397 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:20:21.397 EAL: Requested device 0000:3f:02.7 cannot be used
00:20:21.655 [2024-07-24 16:37:18.314045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:21.913 [2024-07-24 16:37:18.610021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:22.193 [2024-07-24 16:37:18.957899] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:22.193 [2024-07-24 16:37:18.957936] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:20:22.452 [2024-07-24 16:37:19.292702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:20:22.452 [2024-07-24 16:37:19.292759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now
00:20:22.452 [2024-07-24 16:37:19.292774] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:20:22.452 [2024-07-24 16:37:19.292791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:20:22.452 [2024-07-24 16:37:19.292803] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:20:22.452 [2024-07-24 16:37:19.292819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:20:22.452 16:37:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.711 16:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:22.711 "name": "Existed_Raid", 00:20:22.711 "uuid": "43f80a24-829d-45ac-b091-a04b07b816b4", 00:20:22.711 "strip_size_kb": 0, 00:20:22.711 "state": "configuring", 00:20:22.711 "raid_level": "raid1", 00:20:22.711 "superblock": true, 00:20:22.711 "num_base_bdevs": 3, 00:20:22.711 "num_base_bdevs_discovered": 0, 00:20:22.711 "num_base_bdevs_operational": 3, 00:20:22.711 "base_bdevs_list": [ 00:20:22.711 { 00:20:22.711 "name": "BaseBdev1", 00:20:22.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.711 "is_configured": false, 00:20:22.711 "data_offset": 0, 00:20:22.711 "data_size": 0 00:20:22.711 }, 00:20:22.711 { 00:20:22.711 "name": "BaseBdev2", 00:20:22.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.711 "is_configured": false, 00:20:22.711 "data_offset": 0, 00:20:22.711 "data_size": 0 00:20:22.711 }, 00:20:22.711 { 00:20:22.711 "name": "BaseBdev3", 00:20:22.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.711 "is_configured": false, 00:20:22.711 "data_offset": 0, 00:20:22.711 "data_size": 0 00:20:22.711 } 00:20:22.711 ] 00:20:22.711 }' 00:20:22.711 16:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:22.711 16:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.277 16:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:23.534 [2024-07-24 16:37:20.331353] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:23.534 [2024-07-24 16:37:20.331392] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:20:23.534 16:37:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:23.792 [2024-07-24 16:37:20.568021] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:23.792 [2024-07-24 16:37:20.568067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:23.792 [2024-07-24 16:37:20.568080] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:23.792 [2024-07-24 16:37:20.568100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:23.792 [2024-07-24 16:37:20.568111] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:23.792 [2024-07-24 16:37:20.568127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:23.792 16:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:24.050 [2024-07-24 16:37:20.846914] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.050 BaseBdev1 00:20:24.050 16:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:24.050 16:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:24.050 16:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:24.050 16:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:24.050 16:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:24.050 16:37:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:24.050 16:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:24.308 16:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:24.566 [ 00:20:24.566 { 00:20:24.566 "name": "BaseBdev1", 00:20:24.566 "aliases": [ 00:20:24.566 "89c1906a-b439-41c1-aa5f-4b64aec0e17e" 00:20:24.566 ], 00:20:24.566 "product_name": "Malloc disk", 00:20:24.566 "block_size": 512, 00:20:24.566 "num_blocks": 65536, 00:20:24.566 "uuid": "89c1906a-b439-41c1-aa5f-4b64aec0e17e", 00:20:24.566 "assigned_rate_limits": { 00:20:24.566 "rw_ios_per_sec": 0, 00:20:24.566 "rw_mbytes_per_sec": 0, 00:20:24.566 "r_mbytes_per_sec": 0, 00:20:24.566 "w_mbytes_per_sec": 0 00:20:24.566 }, 00:20:24.566 "claimed": true, 00:20:24.566 "claim_type": "exclusive_write", 00:20:24.566 "zoned": false, 00:20:24.566 "supported_io_types": { 00:20:24.566 "read": true, 00:20:24.566 "write": true, 00:20:24.566 "unmap": true, 00:20:24.566 "flush": true, 00:20:24.566 "reset": true, 00:20:24.566 "nvme_admin": false, 00:20:24.566 "nvme_io": false, 00:20:24.566 "nvme_io_md": false, 00:20:24.566 "write_zeroes": true, 00:20:24.566 "zcopy": true, 00:20:24.566 "get_zone_info": false, 00:20:24.566 "zone_management": false, 00:20:24.566 "zone_append": false, 00:20:24.566 "compare": false, 00:20:24.566 "compare_and_write": false, 00:20:24.566 "abort": true, 00:20:24.566 "seek_hole": false, 00:20:24.566 "seek_data": false, 00:20:24.566 "copy": true, 00:20:24.566 "nvme_iov_md": false 00:20:24.566 }, 00:20:24.566 "memory_domains": [ 00:20:24.566 { 00:20:24.566 "dma_device_id": "system", 00:20:24.566 "dma_device_type": 1 00:20:24.566 }, 
00:20:24.566 { 00:20:24.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.566 "dma_device_type": 2 00:20:24.566 } 00:20:24.566 ], 00:20:24.566 "driver_specific": {} 00:20:24.566 } 00:20:24.566 ] 00:20:24.566 16:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:24.566 16:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:24.566 16:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:24.566 16:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:24.566 16:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:24.566 16:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:24.566 16:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:24.566 16:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.567 16:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:24.567 16:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.567 16:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.567 16:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.567 16:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.825 16:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.825 "name": "Existed_Raid", 00:20:24.825 "uuid": 
"22b06bbb-22bd-472f-a2c0-b71d1a1cb7eb", 00:20:24.825 "strip_size_kb": 0, 00:20:24.825 "state": "configuring", 00:20:24.825 "raid_level": "raid1", 00:20:24.825 "superblock": true, 00:20:24.825 "num_base_bdevs": 3, 00:20:24.825 "num_base_bdevs_discovered": 1, 00:20:24.825 "num_base_bdevs_operational": 3, 00:20:24.825 "base_bdevs_list": [ 00:20:24.825 { 00:20:24.825 "name": "BaseBdev1", 00:20:24.825 "uuid": "89c1906a-b439-41c1-aa5f-4b64aec0e17e", 00:20:24.825 "is_configured": true, 00:20:24.825 "data_offset": 2048, 00:20:24.825 "data_size": 63488 00:20:24.825 }, 00:20:24.825 { 00:20:24.825 "name": "BaseBdev2", 00:20:24.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.825 "is_configured": false, 00:20:24.825 "data_offset": 0, 00:20:24.825 "data_size": 0 00:20:24.825 }, 00:20:24.825 { 00:20:24.825 "name": "BaseBdev3", 00:20:24.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.825 "is_configured": false, 00:20:24.825 "data_offset": 0, 00:20:24.825 "data_size": 0 00:20:24.825 } 00:20:24.825 ] 00:20:24.825 }' 00:20:24.825 16:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.825 16:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.390 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:25.648 [2024-07-24 16:37:22.330985] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:25.648 [2024-07-24 16:37:22.331038] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:20:25.648 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 
00:20:25.906 [2024-07-24 16:37:22.555681] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:25.906 [2024-07-24 16:37:22.557960] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:25.906 [2024-07-24 16:37:22.558001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:25.906 [2024-07-24 16:37:22.558015] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:25.906 [2024-07-24 16:37:22.558036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:25.906 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:25.906 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:25.906 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:25.906 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:25.906 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:25.906 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:25.906 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:25.906 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:25.906 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:25.906 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.906 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:25.906 16:37:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.906 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.906 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.164 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.164 "name": "Existed_Raid", 00:20:26.164 "uuid": "05281986-c982-4dac-a225-b5138e5f7634", 00:20:26.164 "strip_size_kb": 0, 00:20:26.164 "state": "configuring", 00:20:26.164 "raid_level": "raid1", 00:20:26.164 "superblock": true, 00:20:26.164 "num_base_bdevs": 3, 00:20:26.164 "num_base_bdevs_discovered": 1, 00:20:26.164 "num_base_bdevs_operational": 3, 00:20:26.164 "base_bdevs_list": [ 00:20:26.164 { 00:20:26.164 "name": "BaseBdev1", 00:20:26.164 "uuid": "89c1906a-b439-41c1-aa5f-4b64aec0e17e", 00:20:26.164 "is_configured": true, 00:20:26.164 "data_offset": 2048, 00:20:26.164 "data_size": 63488 00:20:26.164 }, 00:20:26.164 { 00:20:26.164 "name": "BaseBdev2", 00:20:26.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.164 "is_configured": false, 00:20:26.164 "data_offset": 0, 00:20:26.164 "data_size": 0 00:20:26.164 }, 00:20:26.164 { 00:20:26.164 "name": "BaseBdev3", 00:20:26.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.164 "is_configured": false, 00:20:26.164 "data_offset": 0, 00:20:26.164 "data_size": 0 00:20:26.164 } 00:20:26.164 ] 00:20:26.164 }' 00:20:26.164 16:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.164 16:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.730 16:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:26.988 [2024-07-24 16:37:23.629415] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:26.988 BaseBdev2 00:20:26.988 16:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:26.988 16:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:26.988 16:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:26.988 16:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:26.988 16:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:26.988 16:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:26.988 16:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:27.247 16:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:27.247 [ 00:20:27.247 { 00:20:27.247 "name": "BaseBdev2", 00:20:27.247 "aliases": [ 00:20:27.247 "fe65be94-15b1-460b-998d-b00883106b7f" 00:20:27.247 ], 00:20:27.247 "product_name": "Malloc disk", 00:20:27.247 "block_size": 512, 00:20:27.247 "num_blocks": 65536, 00:20:27.247 "uuid": "fe65be94-15b1-460b-998d-b00883106b7f", 00:20:27.247 "assigned_rate_limits": { 00:20:27.247 "rw_ios_per_sec": 0, 00:20:27.247 "rw_mbytes_per_sec": 0, 00:20:27.247 "r_mbytes_per_sec": 0, 00:20:27.247 "w_mbytes_per_sec": 0 00:20:27.247 }, 00:20:27.247 "claimed": true, 00:20:27.247 "claim_type": "exclusive_write", 00:20:27.247 "zoned": false, 00:20:27.247 "supported_io_types": { 
00:20:27.247 "read": true, 00:20:27.247 "write": true, 00:20:27.247 "unmap": true, 00:20:27.247 "flush": true, 00:20:27.247 "reset": true, 00:20:27.247 "nvme_admin": false, 00:20:27.247 "nvme_io": false, 00:20:27.247 "nvme_io_md": false, 00:20:27.247 "write_zeroes": true, 00:20:27.247 "zcopy": true, 00:20:27.247 "get_zone_info": false, 00:20:27.247 "zone_management": false, 00:20:27.247 "zone_append": false, 00:20:27.247 "compare": false, 00:20:27.247 "compare_and_write": false, 00:20:27.247 "abort": true, 00:20:27.247 "seek_hole": false, 00:20:27.247 "seek_data": false, 00:20:27.247 "copy": true, 00:20:27.247 "nvme_iov_md": false 00:20:27.247 }, 00:20:27.247 "memory_domains": [ 00:20:27.247 { 00:20:27.247 "dma_device_id": "system", 00:20:27.247 "dma_device_type": 1 00:20:27.247 }, 00:20:27.247 { 00:20:27.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.247 "dma_device_type": 2 00:20:27.247 } 00:20:27.247 ], 00:20:27.247 "driver_specific": {} 00:20:27.247 } 00:20:27.247 ] 00:20:27.247 16:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:27.247 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:27.247 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:27.247 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:27.247 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:27.247 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:27.247 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:27.247 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:27.247 16:37:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:27.247 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:27.247 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.247 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.247 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.505 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.505 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.505 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:27.505 "name": "Existed_Raid", 00:20:27.505 "uuid": "05281986-c982-4dac-a225-b5138e5f7634", 00:20:27.505 "strip_size_kb": 0, 00:20:27.505 "state": "configuring", 00:20:27.505 "raid_level": "raid1", 00:20:27.505 "superblock": true, 00:20:27.505 "num_base_bdevs": 3, 00:20:27.505 "num_base_bdevs_discovered": 2, 00:20:27.505 "num_base_bdevs_operational": 3, 00:20:27.505 "base_bdevs_list": [ 00:20:27.505 { 00:20:27.505 "name": "BaseBdev1", 00:20:27.505 "uuid": "89c1906a-b439-41c1-aa5f-4b64aec0e17e", 00:20:27.505 "is_configured": true, 00:20:27.505 "data_offset": 2048, 00:20:27.505 "data_size": 63488 00:20:27.505 }, 00:20:27.505 { 00:20:27.505 "name": "BaseBdev2", 00:20:27.505 "uuid": "fe65be94-15b1-460b-998d-b00883106b7f", 00:20:27.505 "is_configured": true, 00:20:27.505 "data_offset": 2048, 00:20:27.505 "data_size": 63488 00:20:27.505 }, 00:20:27.505 { 00:20:27.505 "name": "BaseBdev3", 00:20:27.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.505 "is_configured": false, 00:20:27.505 "data_offset": 0, 00:20:27.505 
"data_size": 0 00:20:27.505 } 00:20:27.505 ] 00:20:27.505 }' 00:20:27.505 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:27.505 16:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.106 16:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:28.364 [2024-07-24 16:37:25.168495] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:28.364 [2024-07-24 16:37:25.168751] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:20:28.364 [2024-07-24 16:37:25.168779] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:28.364 [2024-07-24 16:37:25.169089] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:20:28.364 [2024-07-24 16:37:25.169347] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:20:28.364 [2024-07-24 16:37:25.169363] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:20:28.364 BaseBdev3 00:20:28.364 [2024-07-24 16:37:25.169570] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:28.364 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:28.364 16:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:28.364 16:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:28.364 16:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:28.364 16:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:28.364 
16:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:28.364 16:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:28.622 16:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:28.880 [ 00:20:28.880 { 00:20:28.880 "name": "BaseBdev3", 00:20:28.880 "aliases": [ 00:20:28.880 "f359bec8-3663-47da-a175-df0b7c5b15b5" 00:20:28.880 ], 00:20:28.880 "product_name": "Malloc disk", 00:20:28.880 "block_size": 512, 00:20:28.880 "num_blocks": 65536, 00:20:28.880 "uuid": "f359bec8-3663-47da-a175-df0b7c5b15b5", 00:20:28.880 "assigned_rate_limits": { 00:20:28.880 "rw_ios_per_sec": 0, 00:20:28.880 "rw_mbytes_per_sec": 0, 00:20:28.880 "r_mbytes_per_sec": 0, 00:20:28.880 "w_mbytes_per_sec": 0 00:20:28.880 }, 00:20:28.880 "claimed": true, 00:20:28.880 "claim_type": "exclusive_write", 00:20:28.880 "zoned": false, 00:20:28.880 "supported_io_types": { 00:20:28.880 "read": true, 00:20:28.880 "write": true, 00:20:28.880 "unmap": true, 00:20:28.880 "flush": true, 00:20:28.880 "reset": true, 00:20:28.880 "nvme_admin": false, 00:20:28.880 "nvme_io": false, 00:20:28.880 "nvme_io_md": false, 00:20:28.880 "write_zeroes": true, 00:20:28.880 "zcopy": true, 00:20:28.880 "get_zone_info": false, 00:20:28.880 "zone_management": false, 00:20:28.880 "zone_append": false, 00:20:28.880 "compare": false, 00:20:28.880 "compare_and_write": false, 00:20:28.880 "abort": true, 00:20:28.880 "seek_hole": false, 00:20:28.880 "seek_data": false, 00:20:28.880 "copy": true, 00:20:28.880 "nvme_iov_md": false 00:20:28.880 }, 00:20:28.880 "memory_domains": [ 00:20:28.880 { 00:20:28.880 "dma_device_id": "system", 00:20:28.880 "dma_device_type": 1 00:20:28.880 }, 
00:20:28.880 { 00:20:28.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.880 "dma_device_type": 2 00:20:28.880 } 00:20:28.880 ], 00:20:28.880 "driver_specific": {} 00:20:28.880 } 00:20:28.880 ] 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.880 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:20:29.139 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.139 "name": "Existed_Raid", 00:20:29.139 "uuid": "05281986-c982-4dac-a225-b5138e5f7634", 00:20:29.139 "strip_size_kb": 0, 00:20:29.139 "state": "online", 00:20:29.139 "raid_level": "raid1", 00:20:29.139 "superblock": true, 00:20:29.139 "num_base_bdevs": 3, 00:20:29.139 "num_base_bdevs_discovered": 3, 00:20:29.139 "num_base_bdevs_operational": 3, 00:20:29.139 "base_bdevs_list": [ 00:20:29.139 { 00:20:29.139 "name": "BaseBdev1", 00:20:29.139 "uuid": "89c1906a-b439-41c1-aa5f-4b64aec0e17e", 00:20:29.139 "is_configured": true, 00:20:29.139 "data_offset": 2048, 00:20:29.139 "data_size": 63488 00:20:29.139 }, 00:20:29.139 { 00:20:29.139 "name": "BaseBdev2", 00:20:29.139 "uuid": "fe65be94-15b1-460b-998d-b00883106b7f", 00:20:29.139 "is_configured": true, 00:20:29.139 "data_offset": 2048, 00:20:29.139 "data_size": 63488 00:20:29.139 }, 00:20:29.139 { 00:20:29.139 "name": "BaseBdev3", 00:20:29.139 "uuid": "f359bec8-3663-47da-a175-df0b7c5b15b5", 00:20:29.139 "is_configured": true, 00:20:29.139 "data_offset": 2048, 00:20:29.139 "data_size": 63488 00:20:29.139 } 00:20:29.139 ] 00:20:29.139 }' 00:20:29.139 16:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.139 16:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.705 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:29.705 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:29.705 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:29.705 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:29.705 16:37:26 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:29.705 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:29.705 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:29.705 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:29.964 [2024-07-24 16:37:26.641105] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:29.964 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:29.964 "name": "Existed_Raid", 00:20:29.964 "aliases": [ 00:20:29.964 "05281986-c982-4dac-a225-b5138e5f7634" 00:20:29.964 ], 00:20:29.964 "product_name": "Raid Volume", 00:20:29.964 "block_size": 512, 00:20:29.964 "num_blocks": 63488, 00:20:29.964 "uuid": "05281986-c982-4dac-a225-b5138e5f7634", 00:20:29.964 "assigned_rate_limits": { 00:20:29.964 "rw_ios_per_sec": 0, 00:20:29.964 "rw_mbytes_per_sec": 0, 00:20:29.964 "r_mbytes_per_sec": 0, 00:20:29.964 "w_mbytes_per_sec": 0 00:20:29.964 }, 00:20:29.964 "claimed": false, 00:20:29.964 "zoned": false, 00:20:29.964 "supported_io_types": { 00:20:29.964 "read": true, 00:20:29.964 "write": true, 00:20:29.964 "unmap": false, 00:20:29.964 "flush": false, 00:20:29.964 "reset": true, 00:20:29.964 "nvme_admin": false, 00:20:29.964 "nvme_io": false, 00:20:29.964 "nvme_io_md": false, 00:20:29.964 "write_zeroes": true, 00:20:29.964 "zcopy": false, 00:20:29.964 "get_zone_info": false, 00:20:29.964 "zone_management": false, 00:20:29.964 "zone_append": false, 00:20:29.964 "compare": false, 00:20:29.964 "compare_and_write": false, 00:20:29.964 "abort": false, 00:20:29.964 "seek_hole": false, 00:20:29.964 "seek_data": false, 00:20:29.964 "copy": false, 00:20:29.964 "nvme_iov_md": false 00:20:29.964 }, 00:20:29.964 "memory_domains": [ 00:20:29.964 { 
00:20:29.964 "dma_device_id": "system", 00:20:29.964 "dma_device_type": 1 00:20:29.964 }, 00:20:29.964 { 00:20:29.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.964 "dma_device_type": 2 00:20:29.964 }, 00:20:29.964 { 00:20:29.964 "dma_device_id": "system", 00:20:29.964 "dma_device_type": 1 00:20:29.964 }, 00:20:29.964 { 00:20:29.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.964 "dma_device_type": 2 00:20:29.964 }, 00:20:29.964 { 00:20:29.964 "dma_device_id": "system", 00:20:29.964 "dma_device_type": 1 00:20:29.964 }, 00:20:29.964 { 00:20:29.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.964 "dma_device_type": 2 00:20:29.964 } 00:20:29.964 ], 00:20:29.964 "driver_specific": { 00:20:29.964 "raid": { 00:20:29.964 "uuid": "05281986-c982-4dac-a225-b5138e5f7634", 00:20:29.964 "strip_size_kb": 0, 00:20:29.964 "state": "online", 00:20:29.964 "raid_level": "raid1", 00:20:29.964 "superblock": true, 00:20:29.964 "num_base_bdevs": 3, 00:20:29.964 "num_base_bdevs_discovered": 3, 00:20:29.964 "num_base_bdevs_operational": 3, 00:20:29.964 "base_bdevs_list": [ 00:20:29.964 { 00:20:29.964 "name": "BaseBdev1", 00:20:29.964 "uuid": "89c1906a-b439-41c1-aa5f-4b64aec0e17e", 00:20:29.964 "is_configured": true, 00:20:29.964 "data_offset": 2048, 00:20:29.964 "data_size": 63488 00:20:29.964 }, 00:20:29.964 { 00:20:29.964 "name": "BaseBdev2", 00:20:29.964 "uuid": "fe65be94-15b1-460b-998d-b00883106b7f", 00:20:29.964 "is_configured": true, 00:20:29.964 "data_offset": 2048, 00:20:29.964 "data_size": 63488 00:20:29.964 }, 00:20:29.964 { 00:20:29.964 "name": "BaseBdev3", 00:20:29.964 "uuid": "f359bec8-3663-47da-a175-df0b7c5b15b5", 00:20:29.964 "is_configured": true, 00:20:29.964 "data_offset": 2048, 00:20:29.964 "data_size": 63488 00:20:29.964 } 00:20:29.964 ] 00:20:29.964 } 00:20:29.964 } 00:20:29.964 }' 00:20:29.964 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:20:29.964 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:29.964 BaseBdev2 00:20:29.964 BaseBdev3' 00:20:29.964 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:29.964 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:29.964 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:30.223 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:30.223 "name": "BaseBdev1", 00:20:30.223 "aliases": [ 00:20:30.223 "89c1906a-b439-41c1-aa5f-4b64aec0e17e" 00:20:30.223 ], 00:20:30.223 "product_name": "Malloc disk", 00:20:30.223 "block_size": 512, 00:20:30.223 "num_blocks": 65536, 00:20:30.223 "uuid": "89c1906a-b439-41c1-aa5f-4b64aec0e17e", 00:20:30.223 "assigned_rate_limits": { 00:20:30.223 "rw_ios_per_sec": 0, 00:20:30.223 "rw_mbytes_per_sec": 0, 00:20:30.223 "r_mbytes_per_sec": 0, 00:20:30.223 "w_mbytes_per_sec": 0 00:20:30.223 }, 00:20:30.223 "claimed": true, 00:20:30.223 "claim_type": "exclusive_write", 00:20:30.223 "zoned": false, 00:20:30.223 "supported_io_types": { 00:20:30.223 "read": true, 00:20:30.223 "write": true, 00:20:30.223 "unmap": true, 00:20:30.223 "flush": true, 00:20:30.223 "reset": true, 00:20:30.223 "nvme_admin": false, 00:20:30.223 "nvme_io": false, 00:20:30.223 "nvme_io_md": false, 00:20:30.223 "write_zeroes": true, 00:20:30.223 "zcopy": true, 00:20:30.223 "get_zone_info": false, 00:20:30.223 "zone_management": false, 00:20:30.223 "zone_append": false, 00:20:30.223 "compare": false, 00:20:30.223 "compare_and_write": false, 00:20:30.223 "abort": true, 00:20:30.223 "seek_hole": false, 00:20:30.223 "seek_data": false, 00:20:30.223 "copy": true, 00:20:30.223 "nvme_iov_md": false 00:20:30.223 
}, 00:20:30.223 "memory_domains": [ 00:20:30.223 { 00:20:30.223 "dma_device_id": "system", 00:20:30.223 "dma_device_type": 1 00:20:30.223 }, 00:20:30.223 { 00:20:30.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.223 "dma_device_type": 2 00:20:30.223 } 00:20:30.223 ], 00:20:30.223 "driver_specific": {} 00:20:30.223 }' 00:20:30.223 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:30.223 16:37:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:30.223 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:30.223 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:30.223 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:30.482 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:30.482 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:30.482 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:30.482 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:30.482 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:30.482 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:30.482 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:30.482 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:30.482 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:30.482 16:37:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:30.740 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:30.740 "name": "BaseBdev2", 00:20:30.740 "aliases": [ 00:20:30.740 "fe65be94-15b1-460b-998d-b00883106b7f" 00:20:30.740 ], 00:20:30.740 "product_name": "Malloc disk", 00:20:30.740 "block_size": 512, 00:20:30.740 "num_blocks": 65536, 00:20:30.740 "uuid": "fe65be94-15b1-460b-998d-b00883106b7f", 00:20:30.740 "assigned_rate_limits": { 00:20:30.740 "rw_ios_per_sec": 0, 00:20:30.740 "rw_mbytes_per_sec": 0, 00:20:30.740 "r_mbytes_per_sec": 0, 00:20:30.740 "w_mbytes_per_sec": 0 00:20:30.740 }, 00:20:30.740 "claimed": true, 00:20:30.740 "claim_type": "exclusive_write", 00:20:30.740 "zoned": false, 00:20:30.740 "supported_io_types": { 00:20:30.740 "read": true, 00:20:30.740 "write": true, 00:20:30.740 "unmap": true, 00:20:30.740 "flush": true, 00:20:30.740 "reset": true, 00:20:30.740 "nvme_admin": false, 00:20:30.740 "nvme_io": false, 00:20:30.740 "nvme_io_md": false, 00:20:30.740 "write_zeroes": true, 00:20:30.740 "zcopy": true, 00:20:30.740 "get_zone_info": false, 00:20:30.740 "zone_management": false, 00:20:30.740 "zone_append": false, 00:20:30.740 "compare": false, 00:20:30.740 "compare_and_write": false, 00:20:30.740 "abort": true, 00:20:30.740 "seek_hole": false, 00:20:30.740 "seek_data": false, 00:20:30.740 "copy": true, 00:20:30.740 "nvme_iov_md": false 00:20:30.740 }, 00:20:30.740 "memory_domains": [ 00:20:30.740 { 00:20:30.740 "dma_device_id": "system", 00:20:30.740 "dma_device_type": 1 00:20:30.740 }, 00:20:30.740 { 00:20:30.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:30.740 "dma_device_type": 2 00:20:30.740 } 00:20:30.740 ], 00:20:30.740 "driver_specific": {} 00:20:30.740 }' 00:20:30.740 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:30.740 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:30.740 16:37:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:30.740 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:30.998 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:30.998 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:30.998 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:30.998 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:30.998 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:30.998 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:30.998 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:30.998 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:30.998 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:30.998 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:30.998 16:37:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:31.257 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:31.257 "name": "BaseBdev3", 00:20:31.257 "aliases": [ 00:20:31.257 "f359bec8-3663-47da-a175-df0b7c5b15b5" 00:20:31.257 ], 00:20:31.257 "product_name": "Malloc disk", 00:20:31.257 "block_size": 512, 00:20:31.257 "num_blocks": 65536, 00:20:31.257 "uuid": "f359bec8-3663-47da-a175-df0b7c5b15b5", 00:20:31.257 "assigned_rate_limits": { 00:20:31.257 "rw_ios_per_sec": 0, 00:20:31.257 "rw_mbytes_per_sec": 0, 00:20:31.257 
"r_mbytes_per_sec": 0, 00:20:31.257 "w_mbytes_per_sec": 0 00:20:31.257 }, 00:20:31.257 "claimed": true, 00:20:31.257 "claim_type": "exclusive_write", 00:20:31.257 "zoned": false, 00:20:31.257 "supported_io_types": { 00:20:31.257 "read": true, 00:20:31.257 "write": true, 00:20:31.257 "unmap": true, 00:20:31.257 "flush": true, 00:20:31.257 "reset": true, 00:20:31.257 "nvme_admin": false, 00:20:31.257 "nvme_io": false, 00:20:31.257 "nvme_io_md": false, 00:20:31.257 "write_zeroes": true, 00:20:31.257 "zcopy": true, 00:20:31.257 "get_zone_info": false, 00:20:31.257 "zone_management": false, 00:20:31.257 "zone_append": false, 00:20:31.257 "compare": false, 00:20:31.257 "compare_and_write": false, 00:20:31.257 "abort": true, 00:20:31.257 "seek_hole": false, 00:20:31.257 "seek_data": false, 00:20:31.257 "copy": true, 00:20:31.257 "nvme_iov_md": false 00:20:31.257 }, 00:20:31.257 "memory_domains": [ 00:20:31.257 { 00:20:31.257 "dma_device_id": "system", 00:20:31.257 "dma_device_type": 1 00:20:31.257 }, 00:20:31.257 { 00:20:31.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.257 "dma_device_type": 2 00:20:31.257 } 00:20:31.257 ], 00:20:31.257 "driver_specific": {} 00:20:31.257 }' 00:20:31.257 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.257 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:31.515 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:31.515 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.515 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:31.515 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:31.515 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.515 16:37:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:31.515 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:31.515 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.515 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:31.774 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:31.774 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:31.774 [2024-07-24 16:37:28.590125] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:32.032 "name": "Existed_Raid", 00:20:32.032 "uuid": "05281986-c982-4dac-a225-b5138e5f7634", 00:20:32.032 "strip_size_kb": 0, 00:20:32.032 "state": "online", 00:20:32.032 "raid_level": "raid1", 00:20:32.032 "superblock": true, 00:20:32.032 "num_base_bdevs": 3, 00:20:32.032 "num_base_bdevs_discovered": 2, 00:20:32.032 "num_base_bdevs_operational": 2, 00:20:32.032 "base_bdevs_list": [ 00:20:32.032 { 00:20:32.032 "name": null, 00:20:32.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.032 "is_configured": false, 00:20:32.032 "data_offset": 2048, 00:20:32.032 "data_size": 63488 00:20:32.032 }, 00:20:32.032 { 00:20:32.032 "name": "BaseBdev2", 00:20:32.032 "uuid": "fe65be94-15b1-460b-998d-b00883106b7f", 00:20:32.032 "is_configured": true, 00:20:32.032 "data_offset": 2048, 00:20:32.032 "data_size": 63488 00:20:32.032 }, 00:20:32.032 { 00:20:32.032 "name": "BaseBdev3", 00:20:32.032 "uuid": "f359bec8-3663-47da-a175-df0b7c5b15b5", 00:20:32.032 "is_configured": true, 00:20:32.032 "data_offset": 2048, 00:20:32.032 
"data_size": 63488 00:20:32.032 } 00:20:32.032 ] 00:20:32.032 }' 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:32.032 16:37:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.599 16:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:32.599 16:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:32.599 16:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.599 16:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:32.857 16:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:32.858 16:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:32.858 16:37:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:33.116 [2024-07-24 16:37:29.885352] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:33.374 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:33.374 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:33.374 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.374 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:33.632 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:20:33.632 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:33.632 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:33.632 [2024-07-24 16:37:30.477165] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:33.632 [2024-07-24 16:37:30.477275] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:33.891 [2024-07-24 16:37:30.613191] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:33.891 [2024-07-24 16:37:30.613249] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:33.891 [2024-07-24 16:37:30.613267] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:20:33.891 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:33.891 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:33.891 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.891 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:34.149 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:34.149 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:34.149 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:34.149 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:34.149 
16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:34.149 16:37:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:34.407 BaseBdev2 00:20:34.407 16:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:34.407 16:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:34.407 16:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:34.407 16:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:34.407 16:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:34.407 16:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:34.407 16:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:34.665 16:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:34.923 [ 00:20:34.923 { 00:20:34.923 "name": "BaseBdev2", 00:20:34.923 "aliases": [ 00:20:34.923 "089070ac-a653-43bb-a372-3b0ad907c4da" 00:20:34.923 ], 00:20:34.923 "product_name": "Malloc disk", 00:20:34.923 "block_size": 512, 00:20:34.923 "num_blocks": 65536, 00:20:34.923 "uuid": "089070ac-a653-43bb-a372-3b0ad907c4da", 00:20:34.923 "assigned_rate_limits": { 00:20:34.923 "rw_ios_per_sec": 0, 00:20:34.923 "rw_mbytes_per_sec": 0, 00:20:34.923 "r_mbytes_per_sec": 0, 00:20:34.923 "w_mbytes_per_sec": 0 00:20:34.923 }, 
00:20:34.923 "claimed": false, 00:20:34.923 "zoned": false, 00:20:34.923 "supported_io_types": { 00:20:34.923 "read": true, 00:20:34.923 "write": true, 00:20:34.923 "unmap": true, 00:20:34.923 "flush": true, 00:20:34.923 "reset": true, 00:20:34.923 "nvme_admin": false, 00:20:34.923 "nvme_io": false, 00:20:34.923 "nvme_io_md": false, 00:20:34.923 "write_zeroes": true, 00:20:34.923 "zcopy": true, 00:20:34.923 "get_zone_info": false, 00:20:34.923 "zone_management": false, 00:20:34.923 "zone_append": false, 00:20:34.923 "compare": false, 00:20:34.923 "compare_and_write": false, 00:20:34.923 "abort": true, 00:20:34.923 "seek_hole": false, 00:20:34.923 "seek_data": false, 00:20:34.923 "copy": true, 00:20:34.923 "nvme_iov_md": false 00:20:34.923 }, 00:20:34.923 "memory_domains": [ 00:20:34.923 { 00:20:34.923 "dma_device_id": "system", 00:20:34.923 "dma_device_type": 1 00:20:34.923 }, 00:20:34.923 { 00:20:34.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.923 "dma_device_type": 2 00:20:34.923 } 00:20:34.923 ], 00:20:34.923 "driver_specific": {} 00:20:34.923 } 00:20:34.923 ] 00:20:34.923 16:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:34.923 16:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:34.923 16:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:34.923 16:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:35.181 BaseBdev3 00:20:35.181 16:37:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:35.181 16:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:35.182 16:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:20:35.182 16:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:35.182 16:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:35.182 16:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:35.182 16:37:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:35.440 16:37:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:35.440 [ 00:20:35.440 { 00:20:35.440 "name": "BaseBdev3", 00:20:35.440 "aliases": [ 00:20:35.440 "3e64932e-c457-49b3-ae02-bb6f10232a15" 00:20:35.440 ], 00:20:35.440 "product_name": "Malloc disk", 00:20:35.440 "block_size": 512, 00:20:35.440 "num_blocks": 65536, 00:20:35.440 "uuid": "3e64932e-c457-49b3-ae02-bb6f10232a15", 00:20:35.440 "assigned_rate_limits": { 00:20:35.440 "rw_ios_per_sec": 0, 00:20:35.440 "rw_mbytes_per_sec": 0, 00:20:35.440 "r_mbytes_per_sec": 0, 00:20:35.440 "w_mbytes_per_sec": 0 00:20:35.440 }, 00:20:35.440 "claimed": false, 00:20:35.440 "zoned": false, 00:20:35.440 "supported_io_types": { 00:20:35.440 "read": true, 00:20:35.440 "write": true, 00:20:35.440 "unmap": true, 00:20:35.440 "flush": true, 00:20:35.440 "reset": true, 00:20:35.440 "nvme_admin": false, 00:20:35.440 "nvme_io": false, 00:20:35.440 "nvme_io_md": false, 00:20:35.440 "write_zeroes": true, 00:20:35.440 "zcopy": true, 00:20:35.440 "get_zone_info": false, 00:20:35.440 "zone_management": false, 00:20:35.440 "zone_append": false, 00:20:35.440 "compare": false, 00:20:35.440 "compare_and_write": false, 00:20:35.440 "abort": true, 00:20:35.440 "seek_hole": false, 00:20:35.440 "seek_data": false, 00:20:35.440 
"copy": true, 00:20:35.440 "nvme_iov_md": false 00:20:35.440 }, 00:20:35.440 "memory_domains": [ 00:20:35.440 { 00:20:35.440 "dma_device_id": "system", 00:20:35.440 "dma_device_type": 1 00:20:35.440 }, 00:20:35.440 { 00:20:35.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.440 "dma_device_type": 2 00:20:35.440 } 00:20:35.440 ], 00:20:35.440 "driver_specific": {} 00:20:35.440 } 00:20:35.440 ] 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:35.698 [2024-07-24 16:37:32.519104] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:35.698 [2024-07-24 16:37:32.519165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:35.698 [2024-07-24 16:37:32.519194] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:35.698 [2024-07-24 16:37:32.521528] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.698 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:35.957 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:35.957 "name": "Existed_Raid", 00:20:35.957 "uuid": "fca37a72-c558-4262-891c-977309b74e55", 00:20:35.957 "strip_size_kb": 0, 00:20:35.957 "state": "configuring", 00:20:35.957 "raid_level": "raid1", 00:20:35.957 "superblock": true, 00:20:35.957 "num_base_bdevs": 3, 00:20:35.957 "num_base_bdevs_discovered": 2, 00:20:35.957 "num_base_bdevs_operational": 3, 00:20:35.957 "base_bdevs_list": [ 00:20:35.957 { 00:20:35.957 "name": "BaseBdev1", 00:20:35.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.957 "is_configured": false, 00:20:35.957 "data_offset": 0, 00:20:35.957 "data_size": 0 00:20:35.957 }, 00:20:35.957 { 00:20:35.957 "name": "BaseBdev2", 00:20:35.957 "uuid": "089070ac-a653-43bb-a372-3b0ad907c4da", 00:20:35.957 "is_configured": true, 00:20:35.957 "data_offset": 2048, 00:20:35.957 "data_size": 63488 00:20:35.957 }, 
00:20:35.957 { 00:20:35.957 "name": "BaseBdev3", 00:20:35.957 "uuid": "3e64932e-c457-49b3-ae02-bb6f10232a15", 00:20:35.957 "is_configured": true, 00:20:35.957 "data_offset": 2048, 00:20:35.957 "data_size": 63488 00:20:35.957 } 00:20:35.957 ] 00:20:35.957 }' 00:20:35.957 16:37:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:35.957 16:37:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:36.525 16:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:36.783 [2024-07-24 16:37:33.537825] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:36.783 16:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:36.783 16:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:36.783 16:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:36.783 16:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:36.783 16:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:36.783 16:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:36.783 16:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:36.783 16:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:36.783 16:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:36.783 16:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:36.783 16:37:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.783 16:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.042 16:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.042 "name": "Existed_Raid", 00:20:37.042 "uuid": "fca37a72-c558-4262-891c-977309b74e55", 00:20:37.042 "strip_size_kb": 0, 00:20:37.042 "state": "configuring", 00:20:37.042 "raid_level": "raid1", 00:20:37.042 "superblock": true, 00:20:37.042 "num_base_bdevs": 3, 00:20:37.042 "num_base_bdevs_discovered": 1, 00:20:37.042 "num_base_bdevs_operational": 3, 00:20:37.042 "base_bdevs_list": [ 00:20:37.042 { 00:20:37.042 "name": "BaseBdev1", 00:20:37.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.042 "is_configured": false, 00:20:37.042 "data_offset": 0, 00:20:37.042 "data_size": 0 00:20:37.042 }, 00:20:37.042 { 00:20:37.042 "name": null, 00:20:37.042 "uuid": "089070ac-a653-43bb-a372-3b0ad907c4da", 00:20:37.042 "is_configured": false, 00:20:37.042 "data_offset": 2048, 00:20:37.042 "data_size": 63488 00:20:37.042 }, 00:20:37.042 { 00:20:37.042 "name": "BaseBdev3", 00:20:37.042 "uuid": "3e64932e-c457-49b3-ae02-bb6f10232a15", 00:20:37.042 "is_configured": true, 00:20:37.042 "data_offset": 2048, 00:20:37.042 "data_size": 63488 00:20:37.042 } 00:20:37.042 ] 00:20:37.042 }' 00:20:37.042 16:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.042 16:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:37.609 16:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:37.609 16:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.868 16:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:37.868 16:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:38.127 [2024-07-24 16:37:34.834051] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:38.127 BaseBdev1 00:20:38.127 16:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:38.127 16:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:38.127 16:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:38.127 16:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:38.127 16:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:38.127 16:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:38.127 16:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:38.386 16:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:38.645 [ 00:20:38.645 { 00:20:38.645 "name": "BaseBdev1", 00:20:38.645 "aliases": [ 00:20:38.645 "765b5e79-e915-4cab-abce-59978587f1aa" 00:20:38.645 ], 00:20:38.645 "product_name": "Malloc disk", 00:20:38.645 "block_size": 512, 00:20:38.645 "num_blocks": 65536, 00:20:38.645 
"uuid": "765b5e79-e915-4cab-abce-59978587f1aa", 00:20:38.645 "assigned_rate_limits": { 00:20:38.645 "rw_ios_per_sec": 0, 00:20:38.645 "rw_mbytes_per_sec": 0, 00:20:38.645 "r_mbytes_per_sec": 0, 00:20:38.645 "w_mbytes_per_sec": 0 00:20:38.645 }, 00:20:38.645 "claimed": true, 00:20:38.645 "claim_type": "exclusive_write", 00:20:38.645 "zoned": false, 00:20:38.645 "supported_io_types": { 00:20:38.645 "read": true, 00:20:38.645 "write": true, 00:20:38.645 "unmap": true, 00:20:38.645 "flush": true, 00:20:38.645 "reset": true, 00:20:38.645 "nvme_admin": false, 00:20:38.645 "nvme_io": false, 00:20:38.645 "nvme_io_md": false, 00:20:38.645 "write_zeroes": true, 00:20:38.645 "zcopy": true, 00:20:38.645 "get_zone_info": false, 00:20:38.645 "zone_management": false, 00:20:38.645 "zone_append": false, 00:20:38.645 "compare": false, 00:20:38.645 "compare_and_write": false, 00:20:38.645 "abort": true, 00:20:38.645 "seek_hole": false, 00:20:38.645 "seek_data": false, 00:20:38.645 "copy": true, 00:20:38.645 "nvme_iov_md": false 00:20:38.645 }, 00:20:38.645 "memory_domains": [ 00:20:38.645 { 00:20:38.645 "dma_device_id": "system", 00:20:38.645 "dma_device_type": 1 00:20:38.645 }, 00:20:38.645 { 00:20:38.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.645 "dma_device_type": 2 00:20:38.645 } 00:20:38.645 ], 00:20:38.645 "driver_specific": {} 00:20:38.645 } 00:20:38.645 ] 00:20:38.645 16:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:38.645 16:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:38.645 16:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:38.645 16:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:38.645 16:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:20:38.645 16:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:38.645 16:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:38.645 16:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.645 16:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.645 16:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.645 16:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.645 16:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.645 16:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:38.904 16:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.904 "name": "Existed_Raid", 00:20:38.904 "uuid": "fca37a72-c558-4262-891c-977309b74e55", 00:20:38.904 "strip_size_kb": 0, 00:20:38.904 "state": "configuring", 00:20:38.904 "raid_level": "raid1", 00:20:38.904 "superblock": true, 00:20:38.904 "num_base_bdevs": 3, 00:20:38.904 "num_base_bdevs_discovered": 2, 00:20:38.904 "num_base_bdevs_operational": 3, 00:20:38.904 "base_bdevs_list": [ 00:20:38.904 { 00:20:38.904 "name": "BaseBdev1", 00:20:38.904 "uuid": "765b5e79-e915-4cab-abce-59978587f1aa", 00:20:38.904 "is_configured": true, 00:20:38.904 "data_offset": 2048, 00:20:38.904 "data_size": 63488 00:20:38.904 }, 00:20:38.904 { 00:20:38.904 "name": null, 00:20:38.904 "uuid": "089070ac-a653-43bb-a372-3b0ad907c4da", 00:20:38.904 "is_configured": false, 00:20:38.904 "data_offset": 2048, 00:20:38.904 "data_size": 63488 00:20:38.904 }, 00:20:38.904 { 00:20:38.904 
"name": "BaseBdev3", 00:20:38.904 "uuid": "3e64932e-c457-49b3-ae02-bb6f10232a15", 00:20:38.904 "is_configured": true, 00:20:38.904 "data_offset": 2048, 00:20:38.904 "data_size": 63488 00:20:38.904 } 00:20:38.904 ] 00:20:38.904 }' 00:20:38.905 16:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.905 16:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.523 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.523 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:39.523 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:39.523 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:39.785 [2024-07-24 16:37:36.566780] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:39.785 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:39.785 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:39.785 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:39.785 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:39.785 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:39.785 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:39.785 16:37:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.785 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.785 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.785 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.785 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.785 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.045 16:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.045 "name": "Existed_Raid", 00:20:40.045 "uuid": "fca37a72-c558-4262-891c-977309b74e55", 00:20:40.045 "strip_size_kb": 0, 00:20:40.045 "state": "configuring", 00:20:40.045 "raid_level": "raid1", 00:20:40.045 "superblock": true, 00:20:40.045 "num_base_bdevs": 3, 00:20:40.045 "num_base_bdevs_discovered": 1, 00:20:40.045 "num_base_bdevs_operational": 3, 00:20:40.045 "base_bdevs_list": [ 00:20:40.045 { 00:20:40.045 "name": "BaseBdev1", 00:20:40.045 "uuid": "765b5e79-e915-4cab-abce-59978587f1aa", 00:20:40.045 "is_configured": true, 00:20:40.045 "data_offset": 2048, 00:20:40.045 "data_size": 63488 00:20:40.045 }, 00:20:40.045 { 00:20:40.045 "name": null, 00:20:40.045 "uuid": "089070ac-a653-43bb-a372-3b0ad907c4da", 00:20:40.045 "is_configured": false, 00:20:40.045 "data_offset": 2048, 00:20:40.045 "data_size": 63488 00:20:40.045 }, 00:20:40.045 { 00:20:40.045 "name": null, 00:20:40.045 "uuid": "3e64932e-c457-49b3-ae02-bb6f10232a15", 00:20:40.045 "is_configured": false, 00:20:40.045 "data_offset": 2048, 00:20:40.045 "data_size": 63488 00:20:40.045 } 00:20:40.045 ] 00:20:40.045 }' 00:20:40.045 16:37:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.045 16:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:40.613 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.613 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:40.872 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:40.872 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:41.131 [2024-07-24 16:37:37.822370] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:41.131 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:41.131 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:41.131 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:41.131 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:41.131 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:41.131 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:41.131 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:41.131 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:41.131 16:37:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:41.131 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:41.131 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.131 16:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.390 16:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.390 "name": "Existed_Raid", 00:20:41.390 "uuid": "fca37a72-c558-4262-891c-977309b74e55", 00:20:41.390 "strip_size_kb": 0, 00:20:41.390 "state": "configuring", 00:20:41.390 "raid_level": "raid1", 00:20:41.390 "superblock": true, 00:20:41.390 "num_base_bdevs": 3, 00:20:41.390 "num_base_bdevs_discovered": 2, 00:20:41.390 "num_base_bdevs_operational": 3, 00:20:41.390 "base_bdevs_list": [ 00:20:41.390 { 00:20:41.390 "name": "BaseBdev1", 00:20:41.390 "uuid": "765b5e79-e915-4cab-abce-59978587f1aa", 00:20:41.390 "is_configured": true, 00:20:41.390 "data_offset": 2048, 00:20:41.390 "data_size": 63488 00:20:41.390 }, 00:20:41.390 { 00:20:41.390 "name": null, 00:20:41.390 "uuid": "089070ac-a653-43bb-a372-3b0ad907c4da", 00:20:41.390 "is_configured": false, 00:20:41.390 "data_offset": 2048, 00:20:41.390 "data_size": 63488 00:20:41.390 }, 00:20:41.390 { 00:20:41.390 "name": "BaseBdev3", 00:20:41.390 "uuid": "3e64932e-c457-49b3-ae02-bb6f10232a15", 00:20:41.390 "is_configured": true, 00:20:41.390 "data_offset": 2048, 00:20:41.390 "data_size": 63488 00:20:41.390 } 00:20:41.390 ] 00:20:41.390 }' 00:20:41.390 16:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.390 16:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.026 16:37:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.026 16:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:42.026 16:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:42.026 16:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:42.285 [2024-07-24 16:37:39.077781] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:42.544 16:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:42.544 16:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:42.544 16:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:42.544 16:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:42.544 16:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:42.544 16:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:42.544 16:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.544 16:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.544 16:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.544 16:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.544 16:37:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.544 16:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.803 16:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.803 "name": "Existed_Raid", 00:20:42.803 "uuid": "fca37a72-c558-4262-891c-977309b74e55", 00:20:42.803 "strip_size_kb": 0, 00:20:42.803 "state": "configuring", 00:20:42.803 "raid_level": "raid1", 00:20:42.803 "superblock": true, 00:20:42.803 "num_base_bdevs": 3, 00:20:42.803 "num_base_bdevs_discovered": 1, 00:20:42.803 "num_base_bdevs_operational": 3, 00:20:42.803 "base_bdevs_list": [ 00:20:42.803 { 00:20:42.803 "name": null, 00:20:42.803 "uuid": "765b5e79-e915-4cab-abce-59978587f1aa", 00:20:42.803 "is_configured": false, 00:20:42.803 "data_offset": 2048, 00:20:42.803 "data_size": 63488 00:20:42.803 }, 00:20:42.803 { 00:20:42.803 "name": null, 00:20:42.804 "uuid": "089070ac-a653-43bb-a372-3b0ad907c4da", 00:20:42.804 "is_configured": false, 00:20:42.804 "data_offset": 2048, 00:20:42.804 "data_size": 63488 00:20:42.804 }, 00:20:42.804 { 00:20:42.804 "name": "BaseBdev3", 00:20:42.804 "uuid": "3e64932e-c457-49b3-ae02-bb6f10232a15", 00:20:42.804 "is_configured": true, 00:20:42.804 "data_offset": 2048, 00:20:42.804 "data_size": 63488 00:20:42.804 } 00:20:42.804 ] 00:20:42.804 }' 00:20:42.804 16:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.804 16:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.372 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.372 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:20:43.631 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:43.631 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:43.631 [2024-07-24 16:37:40.461650] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:43.631 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:43.632 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:43.632 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:43.632 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:43.632 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:43.632 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:43.632 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:43.632 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:43.632 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:43.632 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:43.632 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.632 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:20:43.890 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:43.891 "name": "Existed_Raid", 00:20:43.891 "uuid": "fca37a72-c558-4262-891c-977309b74e55", 00:20:43.891 "strip_size_kb": 0, 00:20:43.891 "state": "configuring", 00:20:43.891 "raid_level": "raid1", 00:20:43.891 "superblock": true, 00:20:43.891 "num_base_bdevs": 3, 00:20:43.891 "num_base_bdevs_discovered": 2, 00:20:43.891 "num_base_bdevs_operational": 3, 00:20:43.891 "base_bdevs_list": [ 00:20:43.891 { 00:20:43.891 "name": null, 00:20:43.891 "uuid": "765b5e79-e915-4cab-abce-59978587f1aa", 00:20:43.891 "is_configured": false, 00:20:43.891 "data_offset": 2048, 00:20:43.891 "data_size": 63488 00:20:43.891 }, 00:20:43.891 { 00:20:43.891 "name": "BaseBdev2", 00:20:43.891 "uuid": "089070ac-a653-43bb-a372-3b0ad907c4da", 00:20:43.891 "is_configured": true, 00:20:43.891 "data_offset": 2048, 00:20:43.891 "data_size": 63488 00:20:43.891 }, 00:20:43.891 { 00:20:43.891 "name": "BaseBdev3", 00:20:43.891 "uuid": "3e64932e-c457-49b3-ae02-bb6f10232a15", 00:20:43.891 "is_configured": true, 00:20:43.891 "data_offset": 2048, 00:20:43.891 "data_size": 63488 00:20:43.891 } 00:20:43.891 ] 00:20:43.891 }' 00:20:43.891 16:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:43.891 16:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.459 16:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.459 16:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:44.717 16:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:44.717 16:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.717 16:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:44.976 16:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 765b5e79-e915-4cab-abce-59978587f1aa 00:20:45.235 [2024-07-24 16:37:41.989938] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:45.235 [2024-07-24 16:37:41.990182] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041780 00:20:45.235 [2024-07-24 16:37:41.990202] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:45.235 [2024-07-24 16:37:41.990512] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:20:45.235 [2024-07-24 16:37:41.990727] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041780 00:20:45.235 [2024-07-24 16:37:41.990747] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000041780 00:20:45.235 [2024-07-24 16:37:41.990924] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.235 NewBaseBdev 00:20:45.235 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:45.235 16:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:20:45.235 16:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:45.236 16:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:45.236 16:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z 
'' ]] 00:20:45.236 16:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:45.236 16:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:45.494 16:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:45.754 [ 00:20:45.754 { 00:20:45.754 "name": "NewBaseBdev", 00:20:45.754 "aliases": [ 00:20:45.754 "765b5e79-e915-4cab-abce-59978587f1aa" 00:20:45.754 ], 00:20:45.754 "product_name": "Malloc disk", 00:20:45.754 "block_size": 512, 00:20:45.754 "num_blocks": 65536, 00:20:45.754 "uuid": "765b5e79-e915-4cab-abce-59978587f1aa", 00:20:45.754 "assigned_rate_limits": { 00:20:45.754 "rw_ios_per_sec": 0, 00:20:45.754 "rw_mbytes_per_sec": 0, 00:20:45.754 "r_mbytes_per_sec": 0, 00:20:45.754 "w_mbytes_per_sec": 0 00:20:45.754 }, 00:20:45.754 "claimed": true, 00:20:45.754 "claim_type": "exclusive_write", 00:20:45.754 "zoned": false, 00:20:45.754 "supported_io_types": { 00:20:45.754 "read": true, 00:20:45.754 "write": true, 00:20:45.754 "unmap": true, 00:20:45.754 "flush": true, 00:20:45.754 "reset": true, 00:20:45.754 "nvme_admin": false, 00:20:45.754 "nvme_io": false, 00:20:45.754 "nvme_io_md": false, 00:20:45.754 "write_zeroes": true, 00:20:45.754 "zcopy": true, 00:20:45.754 "get_zone_info": false, 00:20:45.754 "zone_management": false, 00:20:45.754 "zone_append": false, 00:20:45.754 "compare": false, 00:20:45.754 "compare_and_write": false, 00:20:45.754 "abort": true, 00:20:45.754 "seek_hole": false, 00:20:45.754 "seek_data": false, 00:20:45.754 "copy": true, 00:20:45.754 "nvme_iov_md": false 00:20:45.754 }, 00:20:45.754 "memory_domains": [ 00:20:45.754 { 00:20:45.754 "dma_device_id": "system", 00:20:45.754 
"dma_device_type": 1 00:20:45.754 }, 00:20:45.754 { 00:20:45.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.754 "dma_device_type": 2 00:20:45.754 } 00:20:45.754 ], 00:20:45.754 "driver_specific": {} 00:20:45.754 } 00:20:45.754 ] 00:20:45.754 16:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:45.754 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:45.754 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:45.754 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:45.754 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:45.754 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:45.754 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:45.754 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:45.754 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:45.754 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:45.754 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:45.754 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.754 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.014 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.014 "name": 
"Existed_Raid", 00:20:46.014 "uuid": "fca37a72-c558-4262-891c-977309b74e55", 00:20:46.014 "strip_size_kb": 0, 00:20:46.014 "state": "online", 00:20:46.014 "raid_level": "raid1", 00:20:46.014 "superblock": true, 00:20:46.014 "num_base_bdevs": 3, 00:20:46.014 "num_base_bdevs_discovered": 3, 00:20:46.014 "num_base_bdevs_operational": 3, 00:20:46.014 "base_bdevs_list": [ 00:20:46.014 { 00:20:46.014 "name": "NewBaseBdev", 00:20:46.014 "uuid": "765b5e79-e915-4cab-abce-59978587f1aa", 00:20:46.014 "is_configured": true, 00:20:46.014 "data_offset": 2048, 00:20:46.014 "data_size": 63488 00:20:46.014 }, 00:20:46.014 { 00:20:46.014 "name": "BaseBdev2", 00:20:46.014 "uuid": "089070ac-a653-43bb-a372-3b0ad907c4da", 00:20:46.014 "is_configured": true, 00:20:46.014 "data_offset": 2048, 00:20:46.014 "data_size": 63488 00:20:46.014 }, 00:20:46.014 { 00:20:46.014 "name": "BaseBdev3", 00:20:46.014 "uuid": "3e64932e-c457-49b3-ae02-bb6f10232a15", 00:20:46.014 "is_configured": true, 00:20:46.014 "data_offset": 2048, 00:20:46.014 "data_size": 63488 00:20:46.014 } 00:20:46.014 ] 00:20:46.014 }' 00:20:46.014 16:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.014 16:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.585 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:46.585 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:46.585 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:46.585 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:46.585 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:46.585 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:46.585 
16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:46.585 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:46.844 [2024-07-24 16:37:43.470410] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:46.844 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:46.844 "name": "Existed_Raid", 00:20:46.844 "aliases": [ 00:20:46.844 "fca37a72-c558-4262-891c-977309b74e55" 00:20:46.844 ], 00:20:46.844 "product_name": "Raid Volume", 00:20:46.844 "block_size": 512, 00:20:46.844 "num_blocks": 63488, 00:20:46.844 "uuid": "fca37a72-c558-4262-891c-977309b74e55", 00:20:46.844 "assigned_rate_limits": { 00:20:46.844 "rw_ios_per_sec": 0, 00:20:46.844 "rw_mbytes_per_sec": 0, 00:20:46.844 "r_mbytes_per_sec": 0, 00:20:46.844 "w_mbytes_per_sec": 0 00:20:46.844 }, 00:20:46.844 "claimed": false, 00:20:46.844 "zoned": false, 00:20:46.844 "supported_io_types": { 00:20:46.844 "read": true, 00:20:46.844 "write": true, 00:20:46.844 "unmap": false, 00:20:46.844 "flush": false, 00:20:46.844 "reset": true, 00:20:46.844 "nvme_admin": false, 00:20:46.844 "nvme_io": false, 00:20:46.844 "nvme_io_md": false, 00:20:46.844 "write_zeroes": true, 00:20:46.844 "zcopy": false, 00:20:46.844 "get_zone_info": false, 00:20:46.844 "zone_management": false, 00:20:46.844 "zone_append": false, 00:20:46.844 "compare": false, 00:20:46.844 "compare_and_write": false, 00:20:46.844 "abort": false, 00:20:46.844 "seek_hole": false, 00:20:46.844 "seek_data": false, 00:20:46.844 "copy": false, 00:20:46.844 "nvme_iov_md": false 00:20:46.844 }, 00:20:46.844 "memory_domains": [ 00:20:46.844 { 00:20:46.844 "dma_device_id": "system", 00:20:46.844 "dma_device_type": 1 00:20:46.844 }, 00:20:46.844 { 00:20:46.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:20:46.844 "dma_device_type": 2 00:20:46.844 }, 00:20:46.844 { 00:20:46.844 "dma_device_id": "system", 00:20:46.844 "dma_device_type": 1 00:20:46.844 }, 00:20:46.844 { 00:20:46.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.844 "dma_device_type": 2 00:20:46.844 }, 00:20:46.844 { 00:20:46.844 "dma_device_id": "system", 00:20:46.844 "dma_device_type": 1 00:20:46.844 }, 00:20:46.844 { 00:20:46.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.844 "dma_device_type": 2 00:20:46.844 } 00:20:46.844 ], 00:20:46.844 "driver_specific": { 00:20:46.844 "raid": { 00:20:46.844 "uuid": "fca37a72-c558-4262-891c-977309b74e55", 00:20:46.844 "strip_size_kb": 0, 00:20:46.844 "state": "online", 00:20:46.844 "raid_level": "raid1", 00:20:46.844 "superblock": true, 00:20:46.844 "num_base_bdevs": 3, 00:20:46.844 "num_base_bdevs_discovered": 3, 00:20:46.844 "num_base_bdevs_operational": 3, 00:20:46.844 "base_bdevs_list": [ 00:20:46.844 { 00:20:46.844 "name": "NewBaseBdev", 00:20:46.844 "uuid": "765b5e79-e915-4cab-abce-59978587f1aa", 00:20:46.844 "is_configured": true, 00:20:46.844 "data_offset": 2048, 00:20:46.844 "data_size": 63488 00:20:46.844 }, 00:20:46.844 { 00:20:46.844 "name": "BaseBdev2", 00:20:46.844 "uuid": "089070ac-a653-43bb-a372-3b0ad907c4da", 00:20:46.844 "is_configured": true, 00:20:46.844 "data_offset": 2048, 00:20:46.844 "data_size": 63488 00:20:46.844 }, 00:20:46.844 { 00:20:46.844 "name": "BaseBdev3", 00:20:46.844 "uuid": "3e64932e-c457-49b3-ae02-bb6f10232a15", 00:20:46.844 "is_configured": true, 00:20:46.844 "data_offset": 2048, 00:20:46.844 "data_size": 63488 00:20:46.844 } 00:20:46.844 ] 00:20:46.844 } 00:20:46.844 } 00:20:46.844 }' 00:20:46.844 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:46.844 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:46.844 BaseBdev2 
00:20:46.844 BaseBdev3' 00:20:46.844 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:46.844 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:46.844 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:47.103 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:47.103 "name": "NewBaseBdev", 00:20:47.103 "aliases": [ 00:20:47.103 "765b5e79-e915-4cab-abce-59978587f1aa" 00:20:47.103 ], 00:20:47.103 "product_name": "Malloc disk", 00:20:47.103 "block_size": 512, 00:20:47.103 "num_blocks": 65536, 00:20:47.103 "uuid": "765b5e79-e915-4cab-abce-59978587f1aa", 00:20:47.103 "assigned_rate_limits": { 00:20:47.103 "rw_ios_per_sec": 0, 00:20:47.103 "rw_mbytes_per_sec": 0, 00:20:47.103 "r_mbytes_per_sec": 0, 00:20:47.103 "w_mbytes_per_sec": 0 00:20:47.103 }, 00:20:47.103 "claimed": true, 00:20:47.103 "claim_type": "exclusive_write", 00:20:47.103 "zoned": false, 00:20:47.103 "supported_io_types": { 00:20:47.103 "read": true, 00:20:47.103 "write": true, 00:20:47.103 "unmap": true, 00:20:47.103 "flush": true, 00:20:47.103 "reset": true, 00:20:47.103 "nvme_admin": false, 00:20:47.103 "nvme_io": false, 00:20:47.103 "nvme_io_md": false, 00:20:47.103 "write_zeroes": true, 00:20:47.103 "zcopy": true, 00:20:47.103 "get_zone_info": false, 00:20:47.103 "zone_management": false, 00:20:47.103 "zone_append": false, 00:20:47.103 "compare": false, 00:20:47.103 "compare_and_write": false, 00:20:47.103 "abort": true, 00:20:47.103 "seek_hole": false, 00:20:47.103 "seek_data": false, 00:20:47.103 "copy": true, 00:20:47.103 "nvme_iov_md": false 00:20:47.103 }, 00:20:47.103 "memory_domains": [ 00:20:47.103 { 00:20:47.103 "dma_device_id": "system", 00:20:47.103 "dma_device_type": 1 00:20:47.103 }, 
00:20:47.103 { 00:20:47.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.103 "dma_device_type": 2 00:20:47.103 } 00:20:47.103 ], 00:20:47.103 "driver_specific": {} 00:20:47.103 }' 00:20:47.103 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:47.103 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:47.103 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:47.103 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:47.103 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:47.103 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:47.103 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:47.362 16:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:47.362 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:47.362 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:47.362 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:47.362 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:47.362 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:47.362 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:47.362 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:47.621 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:20:47.621 "name": "BaseBdev2", 00:20:47.621 "aliases": [ 00:20:47.621 "089070ac-a653-43bb-a372-3b0ad907c4da" 00:20:47.621 ], 00:20:47.621 "product_name": "Malloc disk", 00:20:47.621 "block_size": 512, 00:20:47.621 "num_blocks": 65536, 00:20:47.621 "uuid": "089070ac-a653-43bb-a372-3b0ad907c4da", 00:20:47.621 "assigned_rate_limits": { 00:20:47.621 "rw_ios_per_sec": 0, 00:20:47.621 "rw_mbytes_per_sec": 0, 00:20:47.621 "r_mbytes_per_sec": 0, 00:20:47.621 "w_mbytes_per_sec": 0 00:20:47.621 }, 00:20:47.621 "claimed": true, 00:20:47.621 "claim_type": "exclusive_write", 00:20:47.621 "zoned": false, 00:20:47.621 "supported_io_types": { 00:20:47.621 "read": true, 00:20:47.621 "write": true, 00:20:47.621 "unmap": true, 00:20:47.621 "flush": true, 00:20:47.621 "reset": true, 00:20:47.621 "nvme_admin": false, 00:20:47.621 "nvme_io": false, 00:20:47.621 "nvme_io_md": false, 00:20:47.621 "write_zeroes": true, 00:20:47.621 "zcopy": true, 00:20:47.621 "get_zone_info": false, 00:20:47.621 "zone_management": false, 00:20:47.621 "zone_append": false, 00:20:47.621 "compare": false, 00:20:47.621 "compare_and_write": false, 00:20:47.621 "abort": true, 00:20:47.621 "seek_hole": false, 00:20:47.621 "seek_data": false, 00:20:47.621 "copy": true, 00:20:47.621 "nvme_iov_md": false 00:20:47.621 }, 00:20:47.621 "memory_domains": [ 00:20:47.621 { 00:20:47.621 "dma_device_id": "system", 00:20:47.621 "dma_device_type": 1 00:20:47.621 }, 00:20:47.621 { 00:20:47.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.621 "dma_device_type": 2 00:20:47.621 } 00:20:47.621 ], 00:20:47.621 "driver_specific": {} 00:20:47.621 }' 00:20:47.621 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:47.621 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:47.621 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:47.621 16:37:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:47.621 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:47.880 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:47.880 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:47.880 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:47.880 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:47.880 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:47.880 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:47.880 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:47.880 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:47.880 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:47.880 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:48.138 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:48.138 "name": "BaseBdev3", 00:20:48.138 "aliases": [ 00:20:48.138 "3e64932e-c457-49b3-ae02-bb6f10232a15" 00:20:48.138 ], 00:20:48.138 "product_name": "Malloc disk", 00:20:48.138 "block_size": 512, 00:20:48.138 "num_blocks": 65536, 00:20:48.138 "uuid": "3e64932e-c457-49b3-ae02-bb6f10232a15", 00:20:48.138 "assigned_rate_limits": { 00:20:48.138 "rw_ios_per_sec": 0, 00:20:48.138 "rw_mbytes_per_sec": 0, 00:20:48.138 "r_mbytes_per_sec": 0, 00:20:48.138 "w_mbytes_per_sec": 0 00:20:48.138 }, 00:20:48.138 "claimed": true, 00:20:48.138 "claim_type": "exclusive_write", 
00:20:48.138 "zoned": false, 00:20:48.138 "supported_io_types": { 00:20:48.138 "read": true, 00:20:48.138 "write": true, 00:20:48.138 "unmap": true, 00:20:48.138 "flush": true, 00:20:48.138 "reset": true, 00:20:48.138 "nvme_admin": false, 00:20:48.138 "nvme_io": false, 00:20:48.138 "nvme_io_md": false, 00:20:48.138 "write_zeroes": true, 00:20:48.138 "zcopy": true, 00:20:48.138 "get_zone_info": false, 00:20:48.138 "zone_management": false, 00:20:48.138 "zone_append": false, 00:20:48.138 "compare": false, 00:20:48.138 "compare_and_write": false, 00:20:48.138 "abort": true, 00:20:48.138 "seek_hole": false, 00:20:48.138 "seek_data": false, 00:20:48.138 "copy": true, 00:20:48.138 "nvme_iov_md": false 00:20:48.138 }, 00:20:48.138 "memory_domains": [ 00:20:48.138 { 00:20:48.138 "dma_device_id": "system", 00:20:48.138 "dma_device_type": 1 00:20:48.139 }, 00:20:48.139 { 00:20:48.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.139 "dma_device_type": 2 00:20:48.139 } 00:20:48.139 ], 00:20:48.139 "driver_specific": {} 00:20:48.139 }' 00:20:48.139 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.139 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.139 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:48.139 16:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.397 16:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.397 16:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:48.397 16:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.397 16:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.397 16:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:20:48.397 16:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.397 16:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.397 16:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:48.397 16:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:48.656 [2024-07-24 16:37:45.411297] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:48.656 [2024-07-24 16:37:45.411332] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:48.656 [2024-07-24 16:37:45.411415] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:48.656 [2024-07-24 16:37:45.411751] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:48.656 [2024-07-24 16:37:45.411773] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041780 name Existed_Raid, state offline 00:20:48.656 16:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1665784 00:20:48.656 16:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 1665784 ']' 00:20:48.656 16:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 1665784 00:20:48.656 16:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:20:48.656 16:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:48.656 16:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1665784 00:20:48.656 16:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:20:48.656 16:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:48.656 16:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1665784' 00:20:48.656 killing process with pid 1665784 00:20:48.656 16:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 1665784 00:20:48.656 [2024-07-24 16:37:45.489873] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:48.656 16:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 1665784 00:20:49.224 [2024-07-24 16:37:45.820206] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:51.136 16:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:51.136 00:20:51.136 real 0m29.565s 00:20:51.136 user 0m51.638s 00:20:51.136 sys 0m5.140s 00:20:51.136 16:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:51.136 16:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.136 ************************************ 00:20:51.136 END TEST raid_state_function_test_sb 00:20:51.136 ************************************ 00:20:51.136 16:37:47 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:20:51.136 16:37:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:51.136 16:37:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:51.136 16:37:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:51.136 ************************************ 00:20:51.136 START TEST raid_superblock_test 00:20:51.136 ************************************ 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:20:51.136 16:37:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=1671390 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 1671390 /var/tmp/spdk-raid.sock 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:51.136 16:37:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 1671390 ']' 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:51.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:51.136 16:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.136 [2024-07-24 16:37:47.733472] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:20:51.136 [2024-07-24 16:37:47.733591] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1671390 ] 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:20:51.136 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:51.136 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:51.136 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:51.136 [2024-07-24 16:37:47.958051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.396 [2024-07-24 16:37:48.240468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.964 [2024-07-24 
16:37:48.592731] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:51.964 [2024-07-24 16:37:48.592764] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:51.964 16:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:51.964 16:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:20:51.964 16:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:20:51.964 16:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:51.964 16:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:20:51.964 16:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:20:51.964 16:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:51.964 16:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:51.964 16:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:51.964 16:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:51.964 16:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:52.223 malloc1 00:20:52.223 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:52.482 [2024-07-24 16:37:49.276569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:52.482 [2024-07-24 16:37:49.276632] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:20:52.482 [2024-07-24 16:37:49.276662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:20:52.482 [2024-07-24 16:37:49.276679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.482 [2024-07-24 16:37:49.279443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.482 [2024-07-24 16:37:49.279478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:52.482 pt1 00:20:52.482 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:52.482 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:52.482 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:20:52.482 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:20:52.482 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:52.482 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:52.482 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:52.482 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:52.482 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:52.741 malloc2 00:20:52.741 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:53.000 [2024-07-24 16:37:49.782596] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:53.000 [2024-07-24 16:37:49.782657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.000 [2024-07-24 16:37:49.782686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:20:53.000 [2024-07-24 16:37:49.782701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.000 [2024-07-24 16:37:49.785480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.000 [2024-07-24 16:37:49.785520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:53.000 pt2 00:20:53.000 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:53.000 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:53.000 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:20:53.000 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:20:53.001 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:53.001 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:53.001 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:53.001 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:53.001 16:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:53.260 malloc3 00:20:53.260 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:53.519 [2024-07-24 16:37:50.295594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:53.519 [2024-07-24 16:37:50.295662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:53.519 [2024-07-24 16:37:50.295694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:20:53.519 [2024-07-24 16:37:50.295710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:53.519 [2024-07-24 16:37:50.298483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:53.519 [2024-07-24 16:37:50.298517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:53.519 pt3 00:20:53.519 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:53.519 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:53.519 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:20:53.778 [2024-07-24 16:37:50.520243] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:53.778 [2024-07-24 16:37:50.522567] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:53.778 [2024-07-24 16:37:50.522650] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:53.778 [2024-07-24 16:37:50.522888] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041480 00:20:53.778 [2024-07-24 16:37:50.522914] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:53.778 [2024-07-24 16:37:50.523281] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 
00:20:53.778 [2024-07-24 16:37:50.523552] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041480 00:20:53.778 [2024-07-24 16:37:50.523568] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041480 00:20:53.778 [2024-07-24 16:37:50.523773] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.778 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:53.778 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:53.778 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:53.778 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:53.778 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:53.778 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:53.778 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:53.778 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:53.778 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:53.778 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:53.778 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.778 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.038 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.038 "name": "raid_bdev1", 00:20:54.038 "uuid": 
"a5ed196b-6c98-4c7a-b70c-8e0601a74902", 00:20:54.038 "strip_size_kb": 0, 00:20:54.038 "state": "online", 00:20:54.038 "raid_level": "raid1", 00:20:54.038 "superblock": true, 00:20:54.038 "num_base_bdevs": 3, 00:20:54.038 "num_base_bdevs_discovered": 3, 00:20:54.038 "num_base_bdevs_operational": 3, 00:20:54.038 "base_bdevs_list": [ 00:20:54.038 { 00:20:54.038 "name": "pt1", 00:20:54.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:54.038 "is_configured": true, 00:20:54.038 "data_offset": 2048, 00:20:54.038 "data_size": 63488 00:20:54.038 }, 00:20:54.038 { 00:20:54.038 "name": "pt2", 00:20:54.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:54.038 "is_configured": true, 00:20:54.038 "data_offset": 2048, 00:20:54.038 "data_size": 63488 00:20:54.038 }, 00:20:54.038 { 00:20:54.038 "name": "pt3", 00:20:54.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:54.038 "is_configured": true, 00:20:54.038 "data_offset": 2048, 00:20:54.038 "data_size": 63488 00:20:54.038 } 00:20:54.038 ] 00:20:54.038 }' 00:20:54.038 16:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.038 16:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.606 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:20:54.606 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:54.606 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:54.606 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:54.606 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:54.606 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:54.606 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:54.606 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:54.866 [2024-07-24 16:37:51.547327] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:54.866 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:54.866 "name": "raid_bdev1", 00:20:54.866 "aliases": [ 00:20:54.866 "a5ed196b-6c98-4c7a-b70c-8e0601a74902" 00:20:54.866 ], 00:20:54.866 "product_name": "Raid Volume", 00:20:54.866 "block_size": 512, 00:20:54.866 "num_blocks": 63488, 00:20:54.866 "uuid": "a5ed196b-6c98-4c7a-b70c-8e0601a74902", 00:20:54.866 "assigned_rate_limits": { 00:20:54.866 "rw_ios_per_sec": 0, 00:20:54.866 "rw_mbytes_per_sec": 0, 00:20:54.866 "r_mbytes_per_sec": 0, 00:20:54.866 "w_mbytes_per_sec": 0 00:20:54.866 }, 00:20:54.866 "claimed": false, 00:20:54.866 "zoned": false, 00:20:54.866 "supported_io_types": { 00:20:54.866 "read": true, 00:20:54.866 "write": true, 00:20:54.866 "unmap": false, 00:20:54.866 "flush": false, 00:20:54.866 "reset": true, 00:20:54.866 "nvme_admin": false, 00:20:54.866 "nvme_io": false, 00:20:54.866 "nvme_io_md": false, 00:20:54.866 "write_zeroes": true, 00:20:54.866 "zcopy": false, 00:20:54.866 "get_zone_info": false, 00:20:54.866 "zone_management": false, 00:20:54.866 "zone_append": false, 00:20:54.866 "compare": false, 00:20:54.866 "compare_and_write": false, 00:20:54.866 "abort": false, 00:20:54.866 "seek_hole": false, 00:20:54.866 "seek_data": false, 00:20:54.866 "copy": false, 00:20:54.866 "nvme_iov_md": false 00:20:54.866 }, 00:20:54.866 "memory_domains": [ 00:20:54.866 { 00:20:54.866 "dma_device_id": "system", 00:20:54.866 "dma_device_type": 1 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.866 "dma_device_type": 2 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "dma_device_id": "system", 
00:20:54.866 "dma_device_type": 1 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.866 "dma_device_type": 2 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "dma_device_id": "system", 00:20:54.866 "dma_device_type": 1 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.866 "dma_device_type": 2 00:20:54.866 } 00:20:54.866 ], 00:20:54.866 "driver_specific": { 00:20:54.866 "raid": { 00:20:54.866 "uuid": "a5ed196b-6c98-4c7a-b70c-8e0601a74902", 00:20:54.866 "strip_size_kb": 0, 00:20:54.866 "state": "online", 00:20:54.866 "raid_level": "raid1", 00:20:54.866 "superblock": true, 00:20:54.866 "num_base_bdevs": 3, 00:20:54.866 "num_base_bdevs_discovered": 3, 00:20:54.866 "num_base_bdevs_operational": 3, 00:20:54.866 "base_bdevs_list": [ 00:20:54.866 { 00:20:54.866 "name": "pt1", 00:20:54.866 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:54.866 "is_configured": true, 00:20:54.866 "data_offset": 2048, 00:20:54.866 "data_size": 63488 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "name": "pt2", 00:20:54.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:54.866 "is_configured": true, 00:20:54.866 "data_offset": 2048, 00:20:54.866 "data_size": 63488 00:20:54.866 }, 00:20:54.866 { 00:20:54.866 "name": "pt3", 00:20:54.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:54.866 "is_configured": true, 00:20:54.866 "data_offset": 2048, 00:20:54.866 "data_size": 63488 00:20:54.866 } 00:20:54.866 ] 00:20:54.866 } 00:20:54.866 } 00:20:54.866 }' 00:20:54.866 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:54.866 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:54.866 pt2 00:20:54.866 pt3' 00:20:54.866 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:54.866 16:37:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:54.866 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:55.125 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:55.125 "name": "pt1", 00:20:55.125 "aliases": [ 00:20:55.125 "00000000-0000-0000-0000-000000000001" 00:20:55.125 ], 00:20:55.125 "product_name": "passthru", 00:20:55.125 "block_size": 512, 00:20:55.125 "num_blocks": 65536, 00:20:55.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:55.125 "assigned_rate_limits": { 00:20:55.125 "rw_ios_per_sec": 0, 00:20:55.125 "rw_mbytes_per_sec": 0, 00:20:55.125 "r_mbytes_per_sec": 0, 00:20:55.125 "w_mbytes_per_sec": 0 00:20:55.125 }, 00:20:55.125 "claimed": true, 00:20:55.125 "claim_type": "exclusive_write", 00:20:55.125 "zoned": false, 00:20:55.125 "supported_io_types": { 00:20:55.125 "read": true, 00:20:55.125 "write": true, 00:20:55.125 "unmap": true, 00:20:55.125 "flush": true, 00:20:55.125 "reset": true, 00:20:55.125 "nvme_admin": false, 00:20:55.125 "nvme_io": false, 00:20:55.125 "nvme_io_md": false, 00:20:55.125 "write_zeroes": true, 00:20:55.125 "zcopy": true, 00:20:55.125 "get_zone_info": false, 00:20:55.125 "zone_management": false, 00:20:55.125 "zone_append": false, 00:20:55.125 "compare": false, 00:20:55.125 "compare_and_write": false, 00:20:55.125 "abort": true, 00:20:55.125 "seek_hole": false, 00:20:55.125 "seek_data": false, 00:20:55.125 "copy": true, 00:20:55.125 "nvme_iov_md": false 00:20:55.125 }, 00:20:55.125 "memory_domains": [ 00:20:55.125 { 00:20:55.125 "dma_device_id": "system", 00:20:55.125 "dma_device_type": 1 00:20:55.125 }, 00:20:55.125 { 00:20:55.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.125 "dma_device_type": 2 00:20:55.125 } 00:20:55.125 ], 00:20:55.125 "driver_specific": { 00:20:55.125 "passthru": { 00:20:55.125 
"name": "pt1", 00:20:55.125 "base_bdev_name": "malloc1" 00:20:55.125 } 00:20:55.125 } 00:20:55.125 }' 00:20:55.125 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.125 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.125 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:55.125 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.125 16:37:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.383 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:55.383 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.383 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.383 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:55.383 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:55.383 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:55.383 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:55.383 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:55.383 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:55.383 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:55.642 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:55.642 "name": "pt2", 00:20:55.642 "aliases": [ 00:20:55.642 "00000000-0000-0000-0000-000000000002" 00:20:55.642 ], 00:20:55.642 "product_name": "passthru", 00:20:55.642 "block_size": 512, 00:20:55.642 
"num_blocks": 65536, 00:20:55.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:55.642 "assigned_rate_limits": { 00:20:55.642 "rw_ios_per_sec": 0, 00:20:55.642 "rw_mbytes_per_sec": 0, 00:20:55.642 "r_mbytes_per_sec": 0, 00:20:55.642 "w_mbytes_per_sec": 0 00:20:55.642 }, 00:20:55.642 "claimed": true, 00:20:55.642 "claim_type": "exclusive_write", 00:20:55.642 "zoned": false, 00:20:55.642 "supported_io_types": { 00:20:55.642 "read": true, 00:20:55.642 "write": true, 00:20:55.642 "unmap": true, 00:20:55.642 "flush": true, 00:20:55.642 "reset": true, 00:20:55.642 "nvme_admin": false, 00:20:55.642 "nvme_io": false, 00:20:55.642 "nvme_io_md": false, 00:20:55.642 "write_zeroes": true, 00:20:55.642 "zcopy": true, 00:20:55.642 "get_zone_info": false, 00:20:55.642 "zone_management": false, 00:20:55.642 "zone_append": false, 00:20:55.642 "compare": false, 00:20:55.642 "compare_and_write": false, 00:20:55.642 "abort": true, 00:20:55.642 "seek_hole": false, 00:20:55.642 "seek_data": false, 00:20:55.642 "copy": true, 00:20:55.642 "nvme_iov_md": false 00:20:55.642 }, 00:20:55.642 "memory_domains": [ 00:20:55.642 { 00:20:55.642 "dma_device_id": "system", 00:20:55.642 "dma_device_type": 1 00:20:55.642 }, 00:20:55.642 { 00:20:55.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.642 "dma_device_type": 2 00:20:55.642 } 00:20:55.642 ], 00:20:55.642 "driver_specific": { 00:20:55.642 "passthru": { 00:20:55.642 "name": "pt2", 00:20:55.642 "base_bdev_name": "malloc2" 00:20:55.642 } 00:20:55.642 } 00:20:55.642 }' 00:20:55.642 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.642 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.901 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:55.901 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.901 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
jq .md_size 00:20:55.901 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:55.901 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.901 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.901 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:55.901 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:55.901 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:55.901 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:55.901 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:55.901 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:55.901 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:56.159 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:56.159 "name": "pt3", 00:20:56.159 "aliases": [ 00:20:56.159 "00000000-0000-0000-0000-000000000003" 00:20:56.159 ], 00:20:56.159 "product_name": "passthru", 00:20:56.159 "block_size": 512, 00:20:56.159 "num_blocks": 65536, 00:20:56.159 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:56.159 "assigned_rate_limits": { 00:20:56.159 "rw_ios_per_sec": 0, 00:20:56.159 "rw_mbytes_per_sec": 0, 00:20:56.159 "r_mbytes_per_sec": 0, 00:20:56.159 "w_mbytes_per_sec": 0 00:20:56.159 }, 00:20:56.159 "claimed": true, 00:20:56.159 "claim_type": "exclusive_write", 00:20:56.159 "zoned": false, 00:20:56.159 "supported_io_types": { 00:20:56.159 "read": true, 00:20:56.159 "write": true, 00:20:56.159 "unmap": true, 00:20:56.159 "flush": true, 00:20:56.159 "reset": true, 00:20:56.159 
"nvme_admin": false, 00:20:56.159 "nvme_io": false, 00:20:56.159 "nvme_io_md": false, 00:20:56.159 "write_zeroes": true, 00:20:56.159 "zcopy": true, 00:20:56.159 "get_zone_info": false, 00:20:56.159 "zone_management": false, 00:20:56.159 "zone_append": false, 00:20:56.159 "compare": false, 00:20:56.159 "compare_and_write": false, 00:20:56.159 "abort": true, 00:20:56.159 "seek_hole": false, 00:20:56.159 "seek_data": false, 00:20:56.159 "copy": true, 00:20:56.159 "nvme_iov_md": false 00:20:56.159 }, 00:20:56.159 "memory_domains": [ 00:20:56.159 { 00:20:56.159 "dma_device_id": "system", 00:20:56.159 "dma_device_type": 1 00:20:56.159 }, 00:20:56.159 { 00:20:56.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.159 "dma_device_type": 2 00:20:56.159 } 00:20:56.159 ], 00:20:56.159 "driver_specific": { 00:20:56.159 "passthru": { 00:20:56.159 "name": "pt3", 00:20:56.159 "base_bdev_name": "malloc3" 00:20:56.159 } 00:20:56.159 } 00:20:56.159 }' 00:20:56.159 16:37:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.446 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.446 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:56.446 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.447 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.447 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:56.447 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.447 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.447 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:56.447 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.707 16:37:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.707 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:56.707 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:56.707 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:20:56.707 [2024-07-24 16:37:53.552965] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:56.965 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=a5ed196b-6c98-4c7a-b70c-8e0601a74902 00:20:56.965 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z a5ed196b-6c98-4c7a-b70c-8e0601a74902 ']' 00:20:56.965 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:56.965 [2024-07-24 16:37:53.785216] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:56.965 [2024-07-24 16:37:53.785250] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:56.965 [2024-07-24 16:37:53.785333] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.965 [2024-07-24 16:37:53.785422] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.965 [2024-07-24 16:37:53.785441] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041480 name raid_bdev1, state offline 00:20:56.965 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.965 16:37:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r 
'.[]' 00:20:57.224 16:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:20:57.224 16:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:20:57.224 16:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:57.224 16:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:57.483 16:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:57.483 16:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:57.742 16:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:57.742 16:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:58.001 16:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:58.001 16:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:58.259 16:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:20:58.259 16:37:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:58.259 16:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:20:58.259 16:37:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:58.259 16:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:58.259 16:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.259 16:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:58.259 16:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.259 16:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:58.259 16:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.259 16:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:58.259 16:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:58.259 16:37:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:58.518 [2024-07-24 16:37:55.136786] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:58.518 [2024-07-24 16:37:55.139133] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:58.518 [2024-07-24 16:37:55.139213] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 
00:20:58.518 [2024-07-24 16:37:55.139274] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:58.518 [2024-07-24 16:37:55.139329] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:58.518 [2024-07-24 16:37:55.139358] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:58.518 [2024-07-24 16:37:55.139383] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:58.518 [2024-07-24 16:37:55.139402] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state configuring 00:20:58.518 request: 00:20:58.518 { 00:20:58.518 "name": "raid_bdev1", 00:20:58.518 "raid_level": "raid1", 00:20:58.518 "base_bdevs": [ 00:20:58.518 "malloc1", 00:20:58.518 "malloc2", 00:20:58.518 "malloc3" 00:20:58.518 ], 00:20:58.518 "superblock": false, 00:20:58.518 "method": "bdev_raid_create", 00:20:58.518 "req_id": 1 00:20:58.518 } 00:20:58.518 Got JSON-RPC error response 00:20:58.518 response: 00:20:58.518 { 00:20:58.518 "code": -17, 00:20:58.518 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:58.518 } 00:20:58.518 16:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:20:58.518 16:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:58.518 16:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:58.518 16:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:58.518 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.518 16:37:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:58.777 [2024-07-24 16:37:55.597993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:58.777 [2024-07-24 16:37:55.598058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.777 [2024-07-24 16:37:55.598089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:20:58.777 [2024-07-24 16:37:55.598104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.777 [2024-07-24 16:37:55.600930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.777 [2024-07-24 16:37:55.600965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:58.777 [2024-07-24 16:37:55.601067] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:58.777 [2024-07-24 16:37:55.601135] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:58.777 pt1 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.777 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.036 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:59.036 "name": "raid_bdev1", 00:20:59.036 "uuid": "a5ed196b-6c98-4c7a-b70c-8e0601a74902", 00:20:59.036 "strip_size_kb": 0, 00:20:59.036 "state": "configuring", 00:20:59.036 "raid_level": "raid1", 00:20:59.036 "superblock": true, 00:20:59.036 "num_base_bdevs": 3, 00:20:59.036 "num_base_bdevs_discovered": 1, 00:20:59.036 "num_base_bdevs_operational": 3, 00:20:59.036 "base_bdevs_list": [ 00:20:59.036 { 00:20:59.036 "name": "pt1", 00:20:59.036 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:59.036 "is_configured": true, 00:20:59.036 "data_offset": 2048, 00:20:59.036 "data_size": 63488 00:20:59.036 }, 00:20:59.036 { 00:20:59.036 "name": null, 00:20:59.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:59.036 "is_configured": false, 00:20:59.036 "data_offset": 2048, 00:20:59.036 "data_size": 63488 00:20:59.036 }, 00:20:59.036 { 00:20:59.036 "name": null, 00:20:59.036 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:59.036 "is_configured": false, 00:20:59.036 
"data_offset": 2048, 00:20:59.036 "data_size": 63488 00:20:59.036 } 00:20:59.036 ] 00:20:59.036 }' 00:20:59.036 16:37:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:59.036 16:37:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.603 16:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:20:59.603 16:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:59.862 [2024-07-24 16:37:56.608703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:59.862 [2024-07-24 16:37:56.608762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.862 [2024-07-24 16:37:56.608790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:20:59.862 [2024-07-24 16:37:56.608805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.862 [2024-07-24 16:37:56.609393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.862 [2024-07-24 16:37:56.609419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:59.862 [2024-07-24 16:37:56.609513] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:59.862 [2024-07-24 16:37:56.609544] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:59.862 pt2 00:20:59.862 16:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:00.121 [2024-07-24 16:37:56.837368] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:00.121 16:37:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:00.121 16:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:00.121 16:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:00.121 16:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:00.121 16:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:00.121 16:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:00.121 16:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:00.121 16:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:00.121 16:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:00.121 16:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:00.121 16:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.121 16:37:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.379 16:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:00.379 "name": "raid_bdev1", 00:21:00.379 "uuid": "a5ed196b-6c98-4c7a-b70c-8e0601a74902", 00:21:00.379 "strip_size_kb": 0, 00:21:00.379 "state": "configuring", 00:21:00.379 "raid_level": "raid1", 00:21:00.379 "superblock": true, 00:21:00.379 "num_base_bdevs": 3, 00:21:00.379 "num_base_bdevs_discovered": 1, 00:21:00.379 "num_base_bdevs_operational": 3, 00:21:00.379 "base_bdevs_list": [ 00:21:00.379 { 00:21:00.379 "name": "pt1", 00:21:00.379 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:00.379 
"is_configured": true, 00:21:00.379 "data_offset": 2048, 00:21:00.379 "data_size": 63488 00:21:00.379 }, 00:21:00.379 { 00:21:00.379 "name": null, 00:21:00.379 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:00.379 "is_configured": false, 00:21:00.379 "data_offset": 2048, 00:21:00.379 "data_size": 63488 00:21:00.379 }, 00:21:00.380 { 00:21:00.380 "name": null, 00:21:00.380 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:00.380 "is_configured": false, 00:21:00.380 "data_offset": 2048, 00:21:00.380 "data_size": 63488 00:21:00.380 } 00:21:00.380 ] 00:21:00.380 }' 00:21:00.380 16:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:00.380 16:37:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.947 16:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:21:00.947 16:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:00.947 16:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:01.206 [2024-07-24 16:37:57.864173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:01.206 [2024-07-24 16:37:57.864239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.206 [2024-07-24 16:37:57.864262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:21:01.206 [2024-07-24 16:37:57.864280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.206 [2024-07-24 16:37:57.864824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.206 [2024-07-24 16:37:57.864853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:01.206 [2024-07-24 16:37:57.864943] 
bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:01.206 [2024-07-24 16:37:57.864976] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:01.206 pt2 00:21:01.206 16:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:21:01.206 16:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:01.206 16:37:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:01.464 [2024-07-24 16:37:58.092766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:01.464 [2024-07-24 16:37:58.092827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.464 [2024-07-24 16:37:58.092855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:21:01.465 [2024-07-24 16:37:58.092873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.465 [2024-07-24 16:37:58.093439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.465 [2024-07-24 16:37:58.093468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:01.465 [2024-07-24 16:37:58.093556] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:01.465 [2024-07-24 16:37:58.093589] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:01.465 [2024-07-24 16:37:58.093771] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:21:01.465 [2024-07-24 16:37:58.093789] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:01.465 [2024-07-24 16:37:58.094104] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000010710 00:21:01.465 [2024-07-24 16:37:58.094357] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:21:01.465 [2024-07-24 16:37:58.094372] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:21:01.465 [2024-07-24 16:37:58.094546] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.465 pt3 00:21:01.465 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:21:01.465 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:01.465 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:01.465 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:01.465 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:01.465 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:01.465 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:01.465 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:01.465 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:01.465 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:01.465 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:01.465 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:01.465 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.465 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.723 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:01.723 "name": "raid_bdev1", 00:21:01.723 "uuid": "a5ed196b-6c98-4c7a-b70c-8e0601a74902", 00:21:01.723 "strip_size_kb": 0, 00:21:01.723 "state": "online", 00:21:01.723 "raid_level": "raid1", 00:21:01.723 "superblock": true, 00:21:01.723 "num_base_bdevs": 3, 00:21:01.723 "num_base_bdevs_discovered": 3, 00:21:01.723 "num_base_bdevs_operational": 3, 00:21:01.723 "base_bdevs_list": [ 00:21:01.723 { 00:21:01.723 "name": "pt1", 00:21:01.723 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:01.723 "is_configured": true, 00:21:01.723 "data_offset": 2048, 00:21:01.723 "data_size": 63488 00:21:01.723 }, 00:21:01.723 { 00:21:01.723 "name": "pt2", 00:21:01.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:01.723 "is_configured": true, 00:21:01.723 "data_offset": 2048, 00:21:01.723 "data_size": 63488 00:21:01.723 }, 00:21:01.723 { 00:21:01.723 "name": "pt3", 00:21:01.723 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:01.723 "is_configured": true, 00:21:01.723 "data_offset": 2048, 00:21:01.723 "data_size": 63488 00:21:01.723 } 00:21:01.723 ] 00:21:01.723 }' 00:21:01.723 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:01.723 16:37:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.290 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:21:02.290 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:02.290 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:02.290 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:02.290 16:37:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:02.290 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:02.290 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:02.290 16:37:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:02.290 [2024-07-24 16:37:59.123889] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:02.290 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:02.290 "name": "raid_bdev1", 00:21:02.290 "aliases": [ 00:21:02.290 "a5ed196b-6c98-4c7a-b70c-8e0601a74902" 00:21:02.290 ], 00:21:02.290 "product_name": "Raid Volume", 00:21:02.290 "block_size": 512, 00:21:02.290 "num_blocks": 63488, 00:21:02.290 "uuid": "a5ed196b-6c98-4c7a-b70c-8e0601a74902", 00:21:02.290 "assigned_rate_limits": { 00:21:02.290 "rw_ios_per_sec": 0, 00:21:02.290 "rw_mbytes_per_sec": 0, 00:21:02.290 "r_mbytes_per_sec": 0, 00:21:02.290 "w_mbytes_per_sec": 0 00:21:02.290 }, 00:21:02.290 "claimed": false, 00:21:02.290 "zoned": false, 00:21:02.290 "supported_io_types": { 00:21:02.290 "read": true, 00:21:02.290 "write": true, 00:21:02.290 "unmap": false, 00:21:02.290 "flush": false, 00:21:02.290 "reset": true, 00:21:02.290 "nvme_admin": false, 00:21:02.290 "nvme_io": false, 00:21:02.290 "nvme_io_md": false, 00:21:02.290 "write_zeroes": true, 00:21:02.290 "zcopy": false, 00:21:02.290 "get_zone_info": false, 00:21:02.290 "zone_management": false, 00:21:02.290 "zone_append": false, 00:21:02.290 "compare": false, 00:21:02.290 "compare_and_write": false, 00:21:02.290 "abort": false, 00:21:02.290 "seek_hole": false, 00:21:02.290 "seek_data": false, 00:21:02.290 "copy": false, 00:21:02.290 "nvme_iov_md": false 00:21:02.290 }, 00:21:02.290 "memory_domains": [ 00:21:02.290 { 00:21:02.290 "dma_device_id": 
"system", 00:21:02.290 "dma_device_type": 1 00:21:02.290 }, 00:21:02.290 { 00:21:02.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.290 "dma_device_type": 2 00:21:02.290 }, 00:21:02.290 { 00:21:02.290 "dma_device_id": "system", 00:21:02.290 "dma_device_type": 1 00:21:02.290 }, 00:21:02.290 { 00:21:02.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.290 "dma_device_type": 2 00:21:02.290 }, 00:21:02.290 { 00:21:02.290 "dma_device_id": "system", 00:21:02.290 "dma_device_type": 1 00:21:02.290 }, 00:21:02.290 { 00:21:02.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.290 "dma_device_type": 2 00:21:02.290 } 00:21:02.290 ], 00:21:02.290 "driver_specific": { 00:21:02.290 "raid": { 00:21:02.290 "uuid": "a5ed196b-6c98-4c7a-b70c-8e0601a74902", 00:21:02.290 "strip_size_kb": 0, 00:21:02.290 "state": "online", 00:21:02.290 "raid_level": "raid1", 00:21:02.290 "superblock": true, 00:21:02.290 "num_base_bdevs": 3, 00:21:02.290 "num_base_bdevs_discovered": 3, 00:21:02.290 "num_base_bdevs_operational": 3, 00:21:02.290 "base_bdevs_list": [ 00:21:02.290 { 00:21:02.290 "name": "pt1", 00:21:02.290 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:02.290 "is_configured": true, 00:21:02.290 "data_offset": 2048, 00:21:02.290 "data_size": 63488 00:21:02.290 }, 00:21:02.290 { 00:21:02.290 "name": "pt2", 00:21:02.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.290 "is_configured": true, 00:21:02.290 "data_offset": 2048, 00:21:02.290 "data_size": 63488 00:21:02.290 }, 00:21:02.290 { 00:21:02.290 "name": "pt3", 00:21:02.290 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:02.290 "is_configured": true, 00:21:02.290 "data_offset": 2048, 00:21:02.290 "data_size": 63488 00:21:02.290 } 00:21:02.290 ] 00:21:02.291 } 00:21:02.291 } 00:21:02.291 }' 00:21:02.291 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:02.550 16:37:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:02.550 pt2 00:21:02.550 pt3' 00:21:02.550 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:02.550 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:02.550 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:02.809 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:02.809 "name": "pt1", 00:21:02.809 "aliases": [ 00:21:02.809 "00000000-0000-0000-0000-000000000001" 00:21:02.809 ], 00:21:02.809 "product_name": "passthru", 00:21:02.809 "block_size": 512, 00:21:02.809 "num_blocks": 65536, 00:21:02.809 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:02.809 "assigned_rate_limits": { 00:21:02.809 "rw_ios_per_sec": 0, 00:21:02.809 "rw_mbytes_per_sec": 0, 00:21:02.809 "r_mbytes_per_sec": 0, 00:21:02.809 "w_mbytes_per_sec": 0 00:21:02.809 }, 00:21:02.809 "claimed": true, 00:21:02.809 "claim_type": "exclusive_write", 00:21:02.809 "zoned": false, 00:21:02.809 "supported_io_types": { 00:21:02.809 "read": true, 00:21:02.809 "write": true, 00:21:02.809 "unmap": true, 00:21:02.809 "flush": true, 00:21:02.809 "reset": true, 00:21:02.809 "nvme_admin": false, 00:21:02.809 "nvme_io": false, 00:21:02.809 "nvme_io_md": false, 00:21:02.809 "write_zeroes": true, 00:21:02.809 "zcopy": true, 00:21:02.809 "get_zone_info": false, 00:21:02.809 "zone_management": false, 00:21:02.809 "zone_append": false, 00:21:02.809 "compare": false, 00:21:02.809 "compare_and_write": false, 00:21:02.809 "abort": true, 00:21:02.809 "seek_hole": false, 00:21:02.809 "seek_data": false, 00:21:02.809 "copy": true, 00:21:02.809 "nvme_iov_md": false 00:21:02.809 }, 00:21:02.809 "memory_domains": [ 00:21:02.809 { 00:21:02.809 "dma_device_id": "system", 
00:21:02.809 "dma_device_type": 1 00:21:02.809 }, 00:21:02.809 { 00:21:02.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.809 "dma_device_type": 2 00:21:02.809 } 00:21:02.809 ], 00:21:02.809 "driver_specific": { 00:21:02.809 "passthru": { 00:21:02.809 "name": "pt1", 00:21:02.809 "base_bdev_name": "malloc1" 00:21:02.809 } 00:21:02.809 } 00:21:02.809 }' 00:21:02.809 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:02.809 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:02.809 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:02.809 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:02.809 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:02.809 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:02.809 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:02.809 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.068 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:03.068 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.068 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.068 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:03.068 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:03.068 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:03.068 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:03.327 16:37:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:03.327 "name": "pt2", 00:21:03.327 "aliases": [ 00:21:03.327 "00000000-0000-0000-0000-000000000002" 00:21:03.327 ], 00:21:03.327 "product_name": "passthru", 00:21:03.327 "block_size": 512, 00:21:03.327 "num_blocks": 65536, 00:21:03.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:03.327 "assigned_rate_limits": { 00:21:03.327 "rw_ios_per_sec": 0, 00:21:03.327 "rw_mbytes_per_sec": 0, 00:21:03.327 "r_mbytes_per_sec": 0, 00:21:03.327 "w_mbytes_per_sec": 0 00:21:03.327 }, 00:21:03.327 "claimed": true, 00:21:03.327 "claim_type": "exclusive_write", 00:21:03.327 "zoned": false, 00:21:03.327 "supported_io_types": { 00:21:03.327 "read": true, 00:21:03.327 "write": true, 00:21:03.327 "unmap": true, 00:21:03.327 "flush": true, 00:21:03.327 "reset": true, 00:21:03.327 "nvme_admin": false, 00:21:03.327 "nvme_io": false, 00:21:03.327 "nvme_io_md": false, 00:21:03.327 "write_zeroes": true, 00:21:03.327 "zcopy": true, 00:21:03.327 "get_zone_info": false, 00:21:03.327 "zone_management": false, 00:21:03.327 "zone_append": false, 00:21:03.327 "compare": false, 00:21:03.327 "compare_and_write": false, 00:21:03.327 "abort": true, 00:21:03.327 "seek_hole": false, 00:21:03.327 "seek_data": false, 00:21:03.327 "copy": true, 00:21:03.327 "nvme_iov_md": false 00:21:03.327 }, 00:21:03.327 "memory_domains": [ 00:21:03.327 { 00:21:03.327 "dma_device_id": "system", 00:21:03.327 "dma_device_type": 1 00:21:03.327 }, 00:21:03.327 { 00:21:03.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.327 "dma_device_type": 2 00:21:03.327 } 00:21:03.327 ], 00:21:03.327 "driver_specific": { 00:21:03.327 "passthru": { 00:21:03.327 "name": "pt2", 00:21:03.327 "base_bdev_name": "malloc2" 00:21:03.327 } 00:21:03.327 } 00:21:03.327 }' 00:21:03.327 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.327 16:37:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.327 16:38:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:03.327 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.327 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.327 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:03.327 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.327 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.586 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:03.586 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.586 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.586 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:03.586 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:03.586 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:03.586 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:03.845 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:03.845 "name": "pt3", 00:21:03.845 "aliases": [ 00:21:03.845 "00000000-0000-0000-0000-000000000003" 00:21:03.845 ], 00:21:03.845 "product_name": "passthru", 00:21:03.845 "block_size": 512, 00:21:03.845 "num_blocks": 65536, 00:21:03.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:03.845 "assigned_rate_limits": { 00:21:03.845 "rw_ios_per_sec": 0, 00:21:03.845 "rw_mbytes_per_sec": 0, 00:21:03.845 "r_mbytes_per_sec": 0, 00:21:03.845 "w_mbytes_per_sec": 0 00:21:03.845 }, 00:21:03.845 "claimed": true, 00:21:03.845 
"claim_type": "exclusive_write", 00:21:03.845 "zoned": false, 00:21:03.845 "supported_io_types": { 00:21:03.845 "read": true, 00:21:03.845 "write": true, 00:21:03.845 "unmap": true, 00:21:03.845 "flush": true, 00:21:03.845 "reset": true, 00:21:03.845 "nvme_admin": false, 00:21:03.845 "nvme_io": false, 00:21:03.845 "nvme_io_md": false, 00:21:03.845 "write_zeroes": true, 00:21:03.845 "zcopy": true, 00:21:03.845 "get_zone_info": false, 00:21:03.845 "zone_management": false, 00:21:03.845 "zone_append": false, 00:21:03.845 "compare": false, 00:21:03.845 "compare_and_write": false, 00:21:03.845 "abort": true, 00:21:03.845 "seek_hole": false, 00:21:03.845 "seek_data": false, 00:21:03.845 "copy": true, 00:21:03.845 "nvme_iov_md": false 00:21:03.845 }, 00:21:03.845 "memory_domains": [ 00:21:03.845 { 00:21:03.845 "dma_device_id": "system", 00:21:03.845 "dma_device_type": 1 00:21:03.845 }, 00:21:03.845 { 00:21:03.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.845 "dma_device_type": 2 00:21:03.845 } 00:21:03.845 ], 00:21:03.845 "driver_specific": { 00:21:03.845 "passthru": { 00:21:03.845 "name": "pt3", 00:21:03.845 "base_bdev_name": "malloc3" 00:21:03.845 } 00:21:03.845 } 00:21:03.845 }' 00:21:03.845 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.845 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.845 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:03.845 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.845 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.845 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:03.845 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.104 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:21:04.104 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:04.104 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.104 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.104 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:04.104 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:04.104 16:38:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:21:04.363 [2024-07-24 16:38:01.065211] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:04.363 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' a5ed196b-6c98-4c7a-b70c-8e0601a74902 '!=' a5ed196b-6c98-4c7a-b70c-8e0601a74902 ']' 00:21:04.363 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:21:04.363 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:04.363 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:04.363 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:04.622 [2024-07-24 16:38:01.293519] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:04.622 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:04.622 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:04.622 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:04.622 16:38:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:04.622 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:04.622 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:04.622 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:04.622 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.622 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.622 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.622 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.622 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.881 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:04.881 "name": "raid_bdev1", 00:21:04.881 "uuid": "a5ed196b-6c98-4c7a-b70c-8e0601a74902", 00:21:04.881 "strip_size_kb": 0, 00:21:04.881 "state": "online", 00:21:04.881 "raid_level": "raid1", 00:21:04.881 "superblock": true, 00:21:04.881 "num_base_bdevs": 3, 00:21:04.881 "num_base_bdevs_discovered": 2, 00:21:04.881 "num_base_bdevs_operational": 2, 00:21:04.881 "base_bdevs_list": [ 00:21:04.881 { 00:21:04.881 "name": null, 00:21:04.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.881 "is_configured": false, 00:21:04.881 "data_offset": 2048, 00:21:04.881 "data_size": 63488 00:21:04.881 }, 00:21:04.881 { 00:21:04.881 "name": "pt2", 00:21:04.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:04.881 "is_configured": true, 00:21:04.881 "data_offset": 2048, 00:21:04.881 "data_size": 63488 00:21:04.881 }, 00:21:04.881 { 00:21:04.881 "name": "pt3", 00:21:04.881 
"uuid": "00000000-0000-0000-0000-000000000003", 00:21:04.881 "is_configured": true, 00:21:04.881 "data_offset": 2048, 00:21:04.881 "data_size": 63488 00:21:04.881 } 00:21:04.881 ] 00:21:04.881 }' 00:21:04.881 16:38:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:04.881 16:38:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.448 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:05.448 [2024-07-24 16:38:02.236005] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.448 [2024-07-24 16:38:02.236039] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:05.448 [2024-07-24 16:38:02.236119] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:05.448 [2024-07-24 16:38:02.236196] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:05.448 [2024-07-24 16:38:02.236216] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:21:05.448 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.448 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:21:05.707 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:21:05.707 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:21:05.707 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:05.707 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:21:05.707 16:38:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:05.966 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:05.966 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:21:05.966 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:06.224 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:06.224 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:21:06.224 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:21:06.224 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:21:06.224 16:38:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:06.483 [2024-07-24 16:38:03.106307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:06.483 [2024-07-24 16:38:03.106371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.483 [2024-07-24 16:38:03.106395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:21:06.483 [2024-07-24 16:38:03.106413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.483 [2024-07-24 16:38:03.109190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.483 [2024-07-24 16:38:03.109225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:06.483 [2024-07-24 
16:38:03.109313] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:06.483 [2024-07-24 16:38:03.109365] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:06.483 pt2 00:21:06.483 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:06.483 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:06.483 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:06.483 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:06.483 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:06.483 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:06.483 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.483 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.483 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.483 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.483 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.484 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.742 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:06.742 "name": "raid_bdev1", 00:21:06.742 "uuid": "a5ed196b-6c98-4c7a-b70c-8e0601a74902", 00:21:06.742 "strip_size_kb": 0, 00:21:06.742 "state": "configuring", 00:21:06.742 "raid_level": "raid1", 00:21:06.742 
"superblock": true, 00:21:06.742 "num_base_bdevs": 3, 00:21:06.742 "num_base_bdevs_discovered": 1, 00:21:06.742 "num_base_bdevs_operational": 2, 00:21:06.742 "base_bdevs_list": [ 00:21:06.742 { 00:21:06.742 "name": null, 00:21:06.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.742 "is_configured": false, 00:21:06.742 "data_offset": 2048, 00:21:06.742 "data_size": 63488 00:21:06.742 }, 00:21:06.742 { 00:21:06.742 "name": "pt2", 00:21:06.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:06.742 "is_configured": true, 00:21:06.742 "data_offset": 2048, 00:21:06.742 "data_size": 63488 00:21:06.742 }, 00:21:06.742 { 00:21:06.742 "name": null, 00:21:06.742 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:06.742 "is_configured": false, 00:21:06.742 "data_offset": 2048, 00:21:06.742 "data_size": 63488 00:21:06.742 } 00:21:06.742 ] 00:21:06.742 }' 00:21:06.742 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:06.742 16:38:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.309 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:21:07.309 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:21:07.309 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:21:07.309 16:38:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:07.309 [2024-07-24 16:38:04.093001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:07.309 [2024-07-24 16:38:04.093071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.309 [2024-07-24 16:38:04.093098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043b80 00:21:07.309 
[2024-07-24 16:38:04.093117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.309 [2024-07-24 16:38:04.093701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.309 [2024-07-24 16:38:04.093732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:07.309 [2024-07-24 16:38:04.093824] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:07.309 [2024-07-24 16:38:04.093853] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:07.309 [2024-07-24 16:38:04.094007] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043880 00:21:07.309 [2024-07-24 16:38:04.094031] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:07.309 [2024-07-24 16:38:04.094350] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:21:07.309 [2024-07-24 16:38:04.094577] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043880 00:21:07.309 [2024-07-24 16:38:04.094592] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043880 00:21:07.309 [2024-07-24 16:38:04.094780] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.309 pt3 00:21:07.309 16:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:07.309 16:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:07.309 16:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:07.309 16:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:07.309 16:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:07.309 16:38:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:07.309 16:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:07.309 16:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:07.309 16:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:07.309 16:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:07.310 16:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.310 16:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.569 16:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:07.569 "name": "raid_bdev1", 00:21:07.569 "uuid": "a5ed196b-6c98-4c7a-b70c-8e0601a74902", 00:21:07.569 "strip_size_kb": 0, 00:21:07.569 "state": "online", 00:21:07.569 "raid_level": "raid1", 00:21:07.569 "superblock": true, 00:21:07.569 "num_base_bdevs": 3, 00:21:07.569 "num_base_bdevs_discovered": 2, 00:21:07.569 "num_base_bdevs_operational": 2, 00:21:07.569 "base_bdevs_list": [ 00:21:07.569 { 00:21:07.569 "name": null, 00:21:07.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.569 "is_configured": false, 00:21:07.569 "data_offset": 2048, 00:21:07.569 "data_size": 63488 00:21:07.569 }, 00:21:07.569 { 00:21:07.569 "name": "pt2", 00:21:07.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:07.569 "is_configured": true, 00:21:07.569 "data_offset": 2048, 00:21:07.569 "data_size": 63488 00:21:07.569 }, 00:21:07.569 { 00:21:07.569 "name": "pt3", 00:21:07.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:07.569 "is_configured": true, 00:21:07.569 "data_offset": 2048, 00:21:07.569 "data_size": 63488 00:21:07.569 } 00:21:07.569 ] 00:21:07.569 }' 
00:21:07.569 16:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:07.569 16:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.136 16:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:08.395 [2024-07-24 16:38:05.079648] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:08.395 [2024-07-24 16:38:05.079684] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:08.395 [2024-07-24 16:38:05.079758] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:08.395 [2024-07-24 16:38:05.079831] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:08.395 [2024-07-24 16:38:05.079846] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043880 name raid_bdev1, state offline 00:21:08.395 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.395 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:21:08.655 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:21:08.655 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:21:08.655 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:21:08.655 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:21:08.655 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:08.913 16:38:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:08.913 [2024-07-24 16:38:05.761445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:08.913 [2024-07-24 16:38:05.761509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.913 [2024-07-24 16:38:05.761539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:21:08.913 [2024-07-24 16:38:05.761554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.913 [2024-07-24 16:38:05.764316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.913 [2024-07-24 16:38:05.764349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:08.913 [2024-07-24 16:38:05.764446] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:08.913 [2024-07-24 16:38:05.764494] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:08.913 [2024-07-24 16:38:05.764674] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:08.913 [2024-07-24 16:38:05.764694] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:08.913 [2024-07-24 16:38:05.764716] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000044480 name raid_bdev1, state configuring 00:21:08.913 [2024-07-24 16:38:05.764801] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:08.913 pt1 00:21:09.172 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:21:09.172 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 
00:21:09.172 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:09.172 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:09.172 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:09.172 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:09.172 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:09.172 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.172 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.172 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.172 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.172 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.172 16:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.172 16:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.172 "name": "raid_bdev1", 00:21:09.172 "uuid": "a5ed196b-6c98-4c7a-b70c-8e0601a74902", 00:21:09.172 "strip_size_kb": 0, 00:21:09.172 "state": "configuring", 00:21:09.172 "raid_level": "raid1", 00:21:09.172 "superblock": true, 00:21:09.172 "num_base_bdevs": 3, 00:21:09.172 "num_base_bdevs_discovered": 1, 00:21:09.172 "num_base_bdevs_operational": 2, 00:21:09.172 "base_bdevs_list": [ 00:21:09.172 { 00:21:09.172 "name": null, 00:21:09.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.172 "is_configured": false, 00:21:09.172 "data_offset": 2048, 00:21:09.172 "data_size": 63488 
00:21:09.172 }, 00:21:09.172 { 00:21:09.172 "name": "pt2", 00:21:09.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:09.172 "is_configured": true, 00:21:09.172 "data_offset": 2048, 00:21:09.172 "data_size": 63488 00:21:09.172 }, 00:21:09.172 { 00:21:09.172 "name": null, 00:21:09.172 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:09.172 "is_configured": false, 00:21:09.172 "data_offset": 2048, 00:21:09.172 "data_size": 63488 00:21:09.172 } 00:21:09.172 ] 00:21:09.172 }' 00:21:09.172 16:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.172 16:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.740 16:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:21:09.740 16:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:09.998 16:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:21:09.998 16:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:10.258 [2024-07-24 16:38:07.028883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:10.258 [2024-07-24 16:38:07.028947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:10.258 [2024-07-24 16:38:07.028975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044a80 00:21:10.258 [2024-07-24 16:38:07.028991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:10.258 [2024-07-24 16:38:07.029568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:10.258 
[2024-07-24 16:38:07.029594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:10.258 [2024-07-24 16:38:07.029687] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:10.258 [2024-07-24 16:38:07.029713] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:10.258 [2024-07-24 16:38:07.029881] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000044780 00:21:10.258 [2024-07-24 16:38:07.029896] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:10.258 [2024-07-24 16:38:07.030211] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:21:10.258 [2024-07-24 16:38:07.030453] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000044780 00:21:10.258 [2024-07-24 16:38:07.030470] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000044780 00:21:10.258 [2024-07-24 16:38:07.030652] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.258 pt3 00:21:10.258 16:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:10.258 16:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:10.258 16:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:10.258 16:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:10.258 16:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:10.258 16:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:10.258 16:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:10.258 16:38:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:10.258 16:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:10.258 16:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:10.258 16:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.258 16:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.584 16:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.584 "name": "raid_bdev1", 00:21:10.584 "uuid": "a5ed196b-6c98-4c7a-b70c-8e0601a74902", 00:21:10.584 "strip_size_kb": 0, 00:21:10.584 "state": "online", 00:21:10.584 "raid_level": "raid1", 00:21:10.584 "superblock": true, 00:21:10.584 "num_base_bdevs": 3, 00:21:10.584 "num_base_bdevs_discovered": 2, 00:21:10.584 "num_base_bdevs_operational": 2, 00:21:10.584 "base_bdevs_list": [ 00:21:10.584 { 00:21:10.584 "name": null, 00:21:10.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.584 "is_configured": false, 00:21:10.584 "data_offset": 2048, 00:21:10.584 "data_size": 63488 00:21:10.584 }, 00:21:10.584 { 00:21:10.584 "name": "pt2", 00:21:10.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:10.584 "is_configured": true, 00:21:10.584 "data_offset": 2048, 00:21:10.584 "data_size": 63488 00:21:10.584 }, 00:21:10.584 { 00:21:10.584 "name": "pt3", 00:21:10.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:10.584 "is_configured": true, 00:21:10.584 "data_offset": 2048, 00:21:10.584 "data_size": 63488 00:21:10.584 } 00:21:10.584 ] 00:21:10.584 }' 00:21:10.584 16:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.584 16:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.152 16:38:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:11.152 16:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:21:11.721 [2024-07-24 16:38:08.501218] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' a5ed196b-6c98-4c7a-b70c-8e0601a74902 '!=' a5ed196b-6c98-4c7a-b70c-8e0601a74902 ']' 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 1671390 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 1671390 ']' 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 1671390 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1671390 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1671390' 00:21:11.721 killing process with pid 1671390 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 1671390 00:21:11.721 [2024-07-24 16:38:08.562963] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:11.721 [2024-07-24 16:38:08.563056] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.721 16:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 1671390 00:21:11.721 [2024-07-24 16:38:08.563129] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.721 [2024-07-24 16:38:08.563159] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000044780 name raid_bdev1, state offline 00:21:12.289 [2024-07-24 16:38:08.890864] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:14.196 16:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:21:14.196 00:21:14.196 real 0m22.990s 00:21:14.196 user 0m39.981s 00:21:14.196 sys 0m3.965s 00:21:14.196 16:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:14.196 16:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.196 ************************************ 00:21:14.196 END TEST raid_superblock_test 00:21:14.196 ************************************ 00:21:14.196 16:38:10 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:21:14.196 16:38:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:14.196 16:38:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:14.196 16:38:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:14.196 ************************************ 00:21:14.196 START TEST raid_read_error_test 00:21:14.196 ************************************ 00:21:14.196 16:38:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:21:14.196 16:38:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.XDSq8C0BaA 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1675707 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1675707 /var/tmp/spdk-raid.sock 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 1675707 ']' 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:14.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:14.196 16:38:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.196 [2024-07-24 16:38:10.815518] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:21:14.196 [2024-07-24 16:38:10.815637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1675707 ] 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:01.7 
cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:14.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:14.196 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:14.196 [2024-07-24 16:38:11.028861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.455 [2024-07-24 16:38:11.313853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.023 [2024-07-24 16:38:11.660276] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:15.023 [2024-07-24 16:38:11.660314] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:15.282 16:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:15.282 16:38:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:21:15.282 16:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:15.282 16:38:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:15.541 BaseBdev1_malloc 00:21:15.541 16:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:15.800 true 00:21:15.800 16:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:15.800 [2024-07-24 16:38:12.659854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:15.800 [2024-07-24 16:38:12.659918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:15.800 [2024-07-24 16:38:12.659945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:21:15.800 [2024-07-24 16:38:12.659968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.060 [2024-07-24 16:38:12.662768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.060 [2024-07-24 16:38:12.662806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:16.060 BaseBdev1 00:21:16.060 16:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:16.060 16:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:16.319 BaseBdev2_malloc 00:21:16.319 16:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:16.319 true 00:21:16.319 16:38:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:16.578 [2024-07-24 16:38:13.358751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:16.578 [2024-07-24 16:38:13.358811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.578 [2024-07-24 16:38:13.358837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:21:16.578 [2024-07-24 16:38:13.358858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.578 [2024-07-24 16:38:13.361628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.578 [2024-07-24 16:38:13.361666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:16.578 BaseBdev2 00:21:16.578 16:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:16.579 16:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:16.843 BaseBdev3_malloc 00:21:16.843 16:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:17.107 true 00:21:17.107 16:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:17.366 [2024-07-24 16:38:14.056366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:17.366 [2024-07-24 16:38:14.056424] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.366 [2024-07-24 16:38:14.056451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:21:17.366 [2024-07-24 16:38:14.056473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.366 [2024-07-24 16:38:14.059247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.366 [2024-07-24 16:38:14.059285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:17.366 BaseBdev3 00:21:17.366 16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:17.626 [2024-07-24 16:38:14.272994] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:17.626 [2024-07-24 16:38:14.275372] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:17.626 [2024-07-24 16:38:14.275463] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:17.626 [2024-07-24 16:38:14.275747] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:21:17.626 [2024-07-24 16:38:14.275764] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:17.626 [2024-07-24 16:38:14.276099] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:21:17.626 [2024-07-24 16:38:14.276366] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:21:17.626 [2024-07-24 16:38:14.276389] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:21:17.626 [2024-07-24 16:38:14.276602] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.626 
16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:17.626 16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:17.626 16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:17.626 16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:17.626 16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:17.626 16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:17.626 16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:17.626 16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:17.626 16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:17.626 16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:17.626 16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.626 16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.885 16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:17.885 "name": "raid_bdev1", 00:21:17.885 "uuid": "efe847dd-2e89-406a-8486-1aa08aa9ecaa", 00:21:17.885 "strip_size_kb": 0, 00:21:17.885 "state": "online", 00:21:17.885 "raid_level": "raid1", 00:21:17.885 "superblock": true, 00:21:17.885 "num_base_bdevs": 3, 00:21:17.885 "num_base_bdevs_discovered": 3, 00:21:17.885 "num_base_bdevs_operational": 3, 00:21:17.885 "base_bdevs_list": [ 00:21:17.885 { 00:21:17.885 "name": "BaseBdev1", 00:21:17.885 "uuid": 
"80b2e470-2885-5856-81d9-3f37bdac3035", 00:21:17.885 "is_configured": true, 00:21:17.885 "data_offset": 2048, 00:21:17.885 "data_size": 63488 00:21:17.885 }, 00:21:17.885 { 00:21:17.885 "name": "BaseBdev2", 00:21:17.885 "uuid": "426d1c38-6032-50ac-b4a8-56ae04fb36bf", 00:21:17.885 "is_configured": true, 00:21:17.885 "data_offset": 2048, 00:21:17.885 "data_size": 63488 00:21:17.885 }, 00:21:17.885 { 00:21:17.885 "name": "BaseBdev3", 00:21:17.885 "uuid": "489c8a71-c452-5821-a3de-fb61c8b76171", 00:21:17.885 "is_configured": true, 00:21:17.885 "data_offset": 2048, 00:21:17.885 "data_size": 63488 00:21:17.885 } 00:21:17.885 ] 00:21:17.885 }' 00:21:17.885 16:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:17.885 16:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.454 16:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:21:18.454 16:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:18.454 [2024-07-24 16:38:15.169179] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:21:19.392 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.653 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.912 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.912 "name": "raid_bdev1", 00:21:19.912 "uuid": "efe847dd-2e89-406a-8486-1aa08aa9ecaa", 00:21:19.912 "strip_size_kb": 0, 00:21:19.912 "state": "online", 00:21:19.912 "raid_level": "raid1", 00:21:19.913 "superblock": true, 00:21:19.913 "num_base_bdevs": 3, 00:21:19.913 "num_base_bdevs_discovered": 3, 00:21:19.913 "num_base_bdevs_operational": 3, 00:21:19.913 "base_bdevs_list": [ 00:21:19.913 { 00:21:19.913 "name": "BaseBdev1", 00:21:19.913 "uuid": "80b2e470-2885-5856-81d9-3f37bdac3035", 00:21:19.913 "is_configured": 
true, 00:21:19.913 "data_offset": 2048, 00:21:19.913 "data_size": 63488 00:21:19.913 }, 00:21:19.913 { 00:21:19.913 "name": "BaseBdev2", 00:21:19.913 "uuid": "426d1c38-6032-50ac-b4a8-56ae04fb36bf", 00:21:19.913 "is_configured": true, 00:21:19.913 "data_offset": 2048, 00:21:19.913 "data_size": 63488 00:21:19.913 }, 00:21:19.913 { 00:21:19.913 "name": "BaseBdev3", 00:21:19.913 "uuid": "489c8a71-c452-5821-a3de-fb61c8b76171", 00:21:19.913 "is_configured": true, 00:21:19.913 "data_offset": 2048, 00:21:19.913 "data_size": 63488 00:21:19.913 } 00:21:19.913 ] 00:21:19.913 }' 00:21:19.913 16:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.913 16:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.480 16:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:20.480 [2024-07-24 16:38:17.294715] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:20.480 [2024-07-24 16:38:17.294760] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:20.480 [2024-07-24 16:38:17.298025] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:20.480 [2024-07-24 16:38:17.298079] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.480 [2024-07-24 16:38:17.298208] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:20.480 [2024-07-24 16:38:17.298224] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:21:20.480 0 00:21:20.480 16:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1675707 00:21:20.480 16:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 1675707 ']' 00:21:20.480 
16:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 1675707 00:21:20.480 16:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:21:20.480 16:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:20.480 16:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1675707 00:21:20.739 16:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:20.739 16:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:20.739 16:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1675707' 00:21:20.739 killing process with pid 1675707 00:21:20.739 16:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 1675707 00:21:20.739 [2024-07-24 16:38:17.409880] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:20.739 16:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 1675707 00:21:20.998 [2024-07-24 16:38:17.636999] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:22.906 16:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.XDSq8C0BaA 00:21:22.906 16:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:21:22.906 16:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:21:22.906 16:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:21:22.906 16:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:21:22.906 16:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:22.906 16:38:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:22.906 16:38:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:22.906 00:21:22.906 real 0m8.735s 00:21:22.906 user 0m12.543s 00:21:22.906 sys 0m1.345s 00:21:22.906 16:38:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:22.906 16:38:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.906 ************************************ 00:21:22.906 END TEST raid_read_error_test 00:21:22.906 ************************************ 00:21:22.906 16:38:19 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:21:22.906 16:38:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:22.906 16:38:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:22.906 16:38:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:22.906 ************************************ 00:21:22.906 START TEST raid_write_error_test 00:21:22.906 ************************************ 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= 
num_base_bdevs )) 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.OgTKEnuXgI 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1677218 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
waitforlisten 1677218 /var/tmp/spdk-raid.sock 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 1677218 ']' 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:22.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.906 16:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.906 [2024-07-24 16:38:19.645685] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:21:22.906 [2024-07-24 16:38:19.645808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1677218 ] 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:02.3 cannot be used 
00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:23.165 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:23.165 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.165 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:23.165 [2024-07-24 16:38:19.873969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.423 [2024-07-24 16:38:20.151583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.682 [2024-07-24 16:38:20.482637] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.682 [2024-07-24 16:38:20.482679] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.941 16:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.941 16:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:21:23.941 16:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:23.941 16:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:24.199 BaseBdev1_malloc 00:21:24.199 16:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:24.457 true 00:21:24.457 16:38:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:24.715 [2024-07-24 16:38:21.373849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:24.715 [2024-07-24 16:38:21.373912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.715 [2024-07-24 16:38:21.373937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:21:24.715 [2024-07-24 16:38:21.373964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.715 [2024-07-24 16:38:21.376681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.715 [2024-07-24 16:38:21.376719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:24.715 BaseBdev1 00:21:24.715 16:38:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:24.715 16:38:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:24.973 BaseBdev2_malloc 00:21:24.973 16:38:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:25.231 true 00:21:25.232 16:38:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:25.490 [2024-07-24 16:38:22.102590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:21:25.490 [2024-07-24 16:38:22.102646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.490 [2024-07-24 16:38:22.102671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:21:25.490 [2024-07-24 16:38:22.102692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.490 [2024-07-24 16:38:22.105389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.490 [2024-07-24 16:38:22.105426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:25.490 BaseBdev2 00:21:25.490 16:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:25.490 16:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:25.749 BaseBdev3_malloc 00:21:25.749 16:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:26.046 true 00:21:26.046 16:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:26.046 [2024-07-24 16:38:22.832999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:26.046 [2024-07-24 16:38:22.833055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.046 [2024-07-24 16:38:22.833080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:21:26.046 [2024-07-24 16:38:22.833098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.046 
[2024-07-24 16:38:22.835807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.046 [2024-07-24 16:38:22.835842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:26.046 BaseBdev3 00:21:26.046 16:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:26.328 [2024-07-24 16:38:23.053618] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:26.328 [2024-07-24 16:38:23.055933] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:26.328 [2024-07-24 16:38:23.056019] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:26.328 [2024-07-24 16:38:23.056301] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041d80 00:21:26.328 [2024-07-24 16:38:23.056318] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:26.328 [2024-07-24 16:38:23.056627] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:21:26.328 [2024-07-24 16:38:23.056884] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041d80 00:21:26.329 [2024-07-24 16:38:23.056905] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041d80 00:21:26.329 [2024-07-24 16:38:23.057112] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.329 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:26.329 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:26.329 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:21:26.329 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:26.329 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:26.329 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:26.329 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.329 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:26.329 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.329 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.329 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.329 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.588 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:26.588 "name": "raid_bdev1", 00:21:26.588 "uuid": "e592618d-4043-464f-bb19-33aaa5b7de82", 00:21:26.588 "strip_size_kb": 0, 00:21:26.588 "state": "online", 00:21:26.588 "raid_level": "raid1", 00:21:26.588 "superblock": true, 00:21:26.588 "num_base_bdevs": 3, 00:21:26.588 "num_base_bdevs_discovered": 3, 00:21:26.588 "num_base_bdevs_operational": 3, 00:21:26.588 "base_bdevs_list": [ 00:21:26.588 { 00:21:26.588 "name": "BaseBdev1", 00:21:26.588 "uuid": "65e12c08-d748-5996-8b5f-e1069569dc1d", 00:21:26.588 "is_configured": true, 00:21:26.588 "data_offset": 2048, 00:21:26.588 "data_size": 63488 00:21:26.588 }, 00:21:26.588 { 00:21:26.588 "name": "BaseBdev2", 00:21:26.588 "uuid": "5f9e0484-2384-577e-a406-2d31af63e5dc", 00:21:26.588 "is_configured": true, 00:21:26.588 "data_offset": 
2048, 00:21:26.588 "data_size": 63488 00:21:26.588 }, 00:21:26.588 { 00:21:26.588 "name": "BaseBdev3", 00:21:26.588 "uuid": "5b5f5130-fc9b-53de-8831-857a54836023", 00:21:26.588 "is_configured": true, 00:21:26.588 "data_offset": 2048, 00:21:26.588 "data_size": 63488 00:21:26.588 } 00:21:26.588 ] 00:21:26.588 }' 00:21:26.588 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:26.588 16:38:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.155 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:21:27.156 16:38:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:27.156 [2024-07-24 16:38:23.897704] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:21:28.093 16:38:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:28.352 [2024-07-24 16:38:25.008761] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:21:28.352 [2024-07-24 16:38:25.008823] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:28.352 [2024-07-24 16:38:25.009057] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000107e0 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # 
expected_num_base_bdevs=2 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.352 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.612 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:28.612 "name": "raid_bdev1", 00:21:28.612 "uuid": "e592618d-4043-464f-bb19-33aaa5b7de82", 00:21:28.612 "strip_size_kb": 0, 00:21:28.612 "state": "online", 00:21:28.612 "raid_level": "raid1", 00:21:28.612 "superblock": true, 00:21:28.612 "num_base_bdevs": 3, 00:21:28.612 "num_base_bdevs_discovered": 2, 00:21:28.612 "num_base_bdevs_operational": 2, 00:21:28.612 "base_bdevs_list": [ 00:21:28.612 { 00:21:28.612 "name": null, 
00:21:28.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.612 "is_configured": false, 00:21:28.612 "data_offset": 2048, 00:21:28.612 "data_size": 63488 00:21:28.612 }, 00:21:28.612 { 00:21:28.612 "name": "BaseBdev2", 00:21:28.612 "uuid": "5f9e0484-2384-577e-a406-2d31af63e5dc", 00:21:28.612 "is_configured": true, 00:21:28.612 "data_offset": 2048, 00:21:28.612 "data_size": 63488 00:21:28.612 }, 00:21:28.612 { 00:21:28.612 "name": "BaseBdev3", 00:21:28.612 "uuid": "5b5f5130-fc9b-53de-8831-857a54836023", 00:21:28.612 "is_configured": true, 00:21:28.612 "data_offset": 2048, 00:21:28.612 "data_size": 63488 00:21:28.612 } 00:21:28.612 ] 00:21:28.612 }' 00:21:28.612 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:28.612 16:38:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.181 16:38:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:29.181 [2024-07-24 16:38:26.031630] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:29.181 [2024-07-24 16:38:26.031680] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:29.181 [2024-07-24 16:38:26.034983] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:29.181 [2024-07-24 16:38:26.035039] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.181 [2024-07-24 16:38:26.035136] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:29.181 [2024-07-24 16:38:26.035168] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041d80 name raid_bdev1, state offline 00:21:29.181 0 00:21:29.440 16:38:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1677218 00:21:29.440 16:38:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 1677218 ']' 00:21:29.440 16:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 1677218 00:21:29.440 16:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:21:29.440 16:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:29.440 16:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1677218 00:21:29.440 16:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:29.441 16:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:29.441 16:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1677218' 00:21:29.441 killing process with pid 1677218 00:21:29.441 16:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 1677218 00:21:29.441 [2024-07-24 16:38:26.108407] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:29.441 16:38:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 1677218 00:21:29.700 [2024-07-24 16:38:26.337787] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:31.602 16:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.OgTKEnuXgI 00:21:31.602 16:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:21:31.602 16:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:21:31.602 16:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:21:31.602 16:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:21:31.602 16:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:31.602 
16:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:31.602 16:38:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:31.602 00:21:31.602 real 0m8.603s 00:21:31.602 user 0m12.071s 00:21:31.602 sys 0m1.335s 00:21:31.602 16:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:31.602 16:38:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.602 ************************************ 00:21:31.602 END TEST raid_write_error_test 00:21:31.602 ************************************ 00:21:31.602 16:38:28 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:21:31.602 16:38:28 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:21:31.602 16:38:28 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:21:31.602 16:38:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:31.602 16:38:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:31.602 16:38:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:31.602 ************************************ 00:21:31.602 START TEST raid_state_function_test 00:21:31.602 ************************************ 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 
)) 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:31.603 16:38:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1678838 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1678838' 00:21:31.603 Process raid pid: 1678838 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1678838 /var/tmp/spdk-raid.sock 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 1678838 ']' 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:31.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:31.603 16:38:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.603 [2024-07-24 16:38:28.318418] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:21:31.603 [2024-07-24 16:38:28.318535] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 
0000:3d:02.1 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:31.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.603 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:31.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.604 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:31.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.604 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:31.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:31.604 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:31.863 [2024-07-24 16:38:28.544551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.123 [2024-07-24 16:38:28.820279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.382 [2024-07-24 16:38:29.172305] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.382 [2024-07-24 16:38:29.172339] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.640 16:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:32.640 16:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:21:32.641 16:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 
'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:32.898 [2024-07-24 16:38:29.559557] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:32.898 [2024-07-24 16:38:29.559613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:32.898 [2024-07-24 16:38:29.559628] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:32.898 [2024-07-24 16:38:29.559644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:32.898 [2024-07-24 16:38:29.559655] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:32.898 [2024-07-24 16:38:29.559671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:32.898 [2024-07-24 16:38:29.559682] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:32.898 [2024-07-24 16:38:29.559697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:32.898 16:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:32.899 16:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:32.899 16:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:32.899 16:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:32.899 16:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:32.899 16:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:32.899 16:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:32.899 16:38:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:32.899 16:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:32.899 16:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:32.899 16:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.899 16:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:33.157 16:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:33.157 "name": "Existed_Raid", 00:21:33.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.157 "strip_size_kb": 64, 00:21:33.157 "state": "configuring", 00:21:33.157 "raid_level": "raid0", 00:21:33.157 "superblock": false, 00:21:33.157 "num_base_bdevs": 4, 00:21:33.157 "num_base_bdevs_discovered": 0, 00:21:33.157 "num_base_bdevs_operational": 4, 00:21:33.157 "base_bdevs_list": [ 00:21:33.157 { 00:21:33.157 "name": "BaseBdev1", 00:21:33.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.157 "is_configured": false, 00:21:33.157 "data_offset": 0, 00:21:33.157 "data_size": 0 00:21:33.157 }, 00:21:33.157 { 00:21:33.157 "name": "BaseBdev2", 00:21:33.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.157 "is_configured": false, 00:21:33.157 "data_offset": 0, 00:21:33.157 "data_size": 0 00:21:33.157 }, 00:21:33.157 { 00:21:33.157 "name": "BaseBdev3", 00:21:33.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.157 "is_configured": false, 00:21:33.157 "data_offset": 0, 00:21:33.157 "data_size": 0 00:21:33.157 }, 00:21:33.157 { 00:21:33.157 "name": "BaseBdev4", 00:21:33.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.157 "is_configured": false, 00:21:33.157 "data_offset": 0, 00:21:33.157 "data_size": 0 00:21:33.157 } 
00:21:33.157 ] 00:21:33.157 }' 00:21:33.157 16:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:33.157 16:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:33.724 16:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:33.982 [2024-07-24 16:38:30.598197] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:33.982 [2024-07-24 16:38:30.598237] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:21:33.982 16:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:33.982 [2024-07-24 16:38:30.822850] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:33.982 [2024-07-24 16:38:30.822895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:33.982 [2024-07-24 16:38:30.822909] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:33.982 [2024-07-24 16:38:30.822932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:33.982 [2024-07-24 16:38:30.822944] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:33.982 [2024-07-24 16:38:30.822960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:33.982 [2024-07-24 16:38:30.822971] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:33.982 [2024-07-24 16:38:30.822987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev4 doesn't exist now 00:21:33.982 16:38:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:34.550 [2024-07-24 16:38:31.109322] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:34.550 BaseBdev1 00:21:34.550 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:34.550 16:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:34.550 16:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:34.550 16:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:34.550 16:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:34.550 16:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:34.550 16:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:34.550 16:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:34.809 [ 00:21:34.809 { 00:21:34.809 "name": "BaseBdev1", 00:21:34.809 "aliases": [ 00:21:34.809 "3fc70ad7-f292-4eb5-ad8e-d6634a188eeb" 00:21:34.809 ], 00:21:34.809 "product_name": "Malloc disk", 00:21:34.809 "block_size": 512, 00:21:34.809 "num_blocks": 65536, 00:21:34.809 "uuid": "3fc70ad7-f292-4eb5-ad8e-d6634a188eeb", 00:21:34.809 "assigned_rate_limits": { 00:21:34.809 "rw_ios_per_sec": 0, 00:21:34.809 "rw_mbytes_per_sec": 0, 00:21:34.809 "r_mbytes_per_sec": 0, 00:21:34.809 
"w_mbytes_per_sec": 0 00:21:34.809 }, 00:21:34.809 "claimed": true, 00:21:34.809 "claim_type": "exclusive_write", 00:21:34.809 "zoned": false, 00:21:34.809 "supported_io_types": { 00:21:34.809 "read": true, 00:21:34.809 "write": true, 00:21:34.809 "unmap": true, 00:21:34.809 "flush": true, 00:21:34.809 "reset": true, 00:21:34.809 "nvme_admin": false, 00:21:34.809 "nvme_io": false, 00:21:34.809 "nvme_io_md": false, 00:21:34.809 "write_zeroes": true, 00:21:34.809 "zcopy": true, 00:21:34.809 "get_zone_info": false, 00:21:34.809 "zone_management": false, 00:21:34.809 "zone_append": false, 00:21:34.809 "compare": false, 00:21:34.809 "compare_and_write": false, 00:21:34.809 "abort": true, 00:21:34.809 "seek_hole": false, 00:21:34.809 "seek_data": false, 00:21:34.809 "copy": true, 00:21:34.809 "nvme_iov_md": false 00:21:34.809 }, 00:21:34.809 "memory_domains": [ 00:21:34.809 { 00:21:34.809 "dma_device_id": "system", 00:21:34.809 "dma_device_type": 1 00:21:34.809 }, 00:21:34.809 { 00:21:34.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.809 "dma_device_type": 2 00:21:34.809 } 00:21:34.809 ], 00:21:34.809 "driver_specific": {} 00:21:34.809 } 00:21:34.809 ] 00:21:34.809 16:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:34.809 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:34.809 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:34.809 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:34.809 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:34.809 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:34.809 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:21:34.809 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:34.809 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:34.809 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:34.809 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:34.809 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.809 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.068 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:35.068 "name": "Existed_Raid", 00:21:35.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.068 "strip_size_kb": 64, 00:21:35.068 "state": "configuring", 00:21:35.068 "raid_level": "raid0", 00:21:35.068 "superblock": false, 00:21:35.068 "num_base_bdevs": 4, 00:21:35.068 "num_base_bdevs_discovered": 1, 00:21:35.068 "num_base_bdevs_operational": 4, 00:21:35.068 "base_bdevs_list": [ 00:21:35.068 { 00:21:35.068 "name": "BaseBdev1", 00:21:35.068 "uuid": "3fc70ad7-f292-4eb5-ad8e-d6634a188eeb", 00:21:35.068 "is_configured": true, 00:21:35.068 "data_offset": 0, 00:21:35.068 "data_size": 65536 00:21:35.068 }, 00:21:35.068 { 00:21:35.068 "name": "BaseBdev2", 00:21:35.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.068 "is_configured": false, 00:21:35.068 "data_offset": 0, 00:21:35.068 "data_size": 0 00:21:35.068 }, 00:21:35.068 { 00:21:35.068 "name": "BaseBdev3", 00:21:35.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.068 "is_configured": false, 00:21:35.068 "data_offset": 0, 00:21:35.068 "data_size": 0 00:21:35.068 }, 00:21:35.068 { 00:21:35.068 
"name": "BaseBdev4", 00:21:35.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.068 "is_configured": false, 00:21:35.068 "data_offset": 0, 00:21:35.068 "data_size": 0 00:21:35.068 } 00:21:35.068 ] 00:21:35.068 }' 00:21:35.068 16:38:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:35.068 16:38:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.003 16:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:36.261 [2024-07-24 16:38:32.894187] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:36.261 [2024-07-24 16:38:32.894241] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:21:36.261 16:38:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:36.261 [2024-07-24 16:38:33.070751] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:36.261 [2024-07-24 16:38:33.073038] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:36.261 [2024-07-24 16:38:33.073081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:36.261 [2024-07-24 16:38:33.073095] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:36.261 [2024-07-24 16:38:33.073111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:36.261 [2024-07-24 16:38:33.073123] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:36.261 [2024-07-24 16:38:33.073150] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:36.261 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:36.261 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:36.261 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:36.261 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:36.261 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:36.261 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:36.261 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:36.261 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:36.261 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:36.261 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:36.261 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:36.261 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:36.261 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.261 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.520 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:36.520 "name": "Existed_Raid", 00:21:36.520 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:36.520 "strip_size_kb": 64, 00:21:36.520 "state": "configuring", 00:21:36.520 "raid_level": "raid0", 00:21:36.520 "superblock": false, 00:21:36.520 "num_base_bdevs": 4, 00:21:36.520 "num_base_bdevs_discovered": 1, 00:21:36.520 "num_base_bdevs_operational": 4, 00:21:36.520 "base_bdevs_list": [ 00:21:36.520 { 00:21:36.520 "name": "BaseBdev1", 00:21:36.520 "uuid": "3fc70ad7-f292-4eb5-ad8e-d6634a188eeb", 00:21:36.520 "is_configured": true, 00:21:36.520 "data_offset": 0, 00:21:36.520 "data_size": 65536 00:21:36.520 }, 00:21:36.520 { 00:21:36.520 "name": "BaseBdev2", 00:21:36.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.520 "is_configured": false, 00:21:36.520 "data_offset": 0, 00:21:36.520 "data_size": 0 00:21:36.520 }, 00:21:36.520 { 00:21:36.520 "name": "BaseBdev3", 00:21:36.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.520 "is_configured": false, 00:21:36.520 "data_offset": 0, 00:21:36.520 "data_size": 0 00:21:36.520 }, 00:21:36.520 { 00:21:36.520 "name": "BaseBdev4", 00:21:36.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.520 "is_configured": false, 00:21:36.520 "data_offset": 0, 00:21:36.520 "data_size": 0 00:21:36.520 } 00:21:36.520 ] 00:21:36.520 }' 00:21:36.520 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:36.520 16:38:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.087 16:38:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:37.346 [2024-07-24 16:38:33.995584] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:37.346 BaseBdev2 00:21:37.346 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:37.346 16:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev2 00:21:37.346 16:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:37.346 16:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:37.346 16:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:37.346 16:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:37.346 16:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:37.604 16:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:37.604 [ 00:21:37.604 { 00:21:37.604 "name": "BaseBdev2", 00:21:37.604 "aliases": [ 00:21:37.604 "37bb8e19-4885-46a4-b28b-78e8df66e698" 00:21:37.604 ], 00:21:37.604 "product_name": "Malloc disk", 00:21:37.604 "block_size": 512, 00:21:37.604 "num_blocks": 65536, 00:21:37.604 "uuid": "37bb8e19-4885-46a4-b28b-78e8df66e698", 00:21:37.604 "assigned_rate_limits": { 00:21:37.604 "rw_ios_per_sec": 0, 00:21:37.604 "rw_mbytes_per_sec": 0, 00:21:37.604 "r_mbytes_per_sec": 0, 00:21:37.604 "w_mbytes_per_sec": 0 00:21:37.604 }, 00:21:37.604 "claimed": true, 00:21:37.604 "claim_type": "exclusive_write", 00:21:37.604 "zoned": false, 00:21:37.604 "supported_io_types": { 00:21:37.604 "read": true, 00:21:37.604 "write": true, 00:21:37.604 "unmap": true, 00:21:37.604 "flush": true, 00:21:37.604 "reset": true, 00:21:37.604 "nvme_admin": false, 00:21:37.604 "nvme_io": false, 00:21:37.604 "nvme_io_md": false, 00:21:37.604 "write_zeroes": true, 00:21:37.604 "zcopy": true, 00:21:37.604 "get_zone_info": false, 00:21:37.604 "zone_management": false, 00:21:37.604 "zone_append": false, 00:21:37.604 "compare": false, 
00:21:37.604 "compare_and_write": false, 00:21:37.604 "abort": true, 00:21:37.604 "seek_hole": false, 00:21:37.604 "seek_data": false, 00:21:37.604 "copy": true, 00:21:37.604 "nvme_iov_md": false 00:21:37.604 }, 00:21:37.604 "memory_domains": [ 00:21:37.604 { 00:21:37.604 "dma_device_id": "system", 00:21:37.604 "dma_device_type": 1 00:21:37.604 }, 00:21:37.604 { 00:21:37.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.605 "dma_device_type": 2 00:21:37.605 } 00:21:37.605 ], 00:21:37.605 "driver_specific": {} 00:21:37.605 } 00:21:37.605 ] 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:37.863 "name": "Existed_Raid", 00:21:37.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.863 "strip_size_kb": 64, 00:21:37.863 "state": "configuring", 00:21:37.863 "raid_level": "raid0", 00:21:37.863 "superblock": false, 00:21:37.863 "num_base_bdevs": 4, 00:21:37.863 "num_base_bdevs_discovered": 2, 00:21:37.863 "num_base_bdevs_operational": 4, 00:21:37.863 "base_bdevs_list": [ 00:21:37.863 { 00:21:37.863 "name": "BaseBdev1", 00:21:37.863 "uuid": "3fc70ad7-f292-4eb5-ad8e-d6634a188eeb", 00:21:37.863 "is_configured": true, 00:21:37.863 "data_offset": 0, 00:21:37.863 "data_size": 65536 00:21:37.863 }, 00:21:37.863 { 00:21:37.863 "name": "BaseBdev2", 00:21:37.863 "uuid": "37bb8e19-4885-46a4-b28b-78e8df66e698", 00:21:37.863 "is_configured": true, 00:21:37.863 "data_offset": 0, 00:21:37.863 "data_size": 65536 00:21:37.863 }, 00:21:37.863 { 00:21:37.863 "name": "BaseBdev3", 00:21:37.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.863 "is_configured": false, 00:21:37.863 "data_offset": 0, 00:21:37.863 "data_size": 0 00:21:37.863 }, 00:21:37.863 { 00:21:37.863 "name": "BaseBdev4", 00:21:37.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.863 "is_configured": false, 00:21:37.863 "data_offset": 0, 00:21:37.863 "data_size": 0 00:21:37.863 } 00:21:37.863 ] 00:21:37.863 }' 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:37.863 16:38:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:21:38.430 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:38.688 [2024-07-24 16:38:35.526190] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:38.688 BaseBdev3 00:21:38.688 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:38.688 16:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:38.688 16:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:38.688 16:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:38.688 16:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:38.688 16:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:38.688 16:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:38.947 16:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:39.206 [ 00:21:39.206 { 00:21:39.206 "name": "BaseBdev3", 00:21:39.206 "aliases": [ 00:21:39.206 "f247e337-21f6-4af6-a3eb-bd88462363d1" 00:21:39.206 ], 00:21:39.206 "product_name": "Malloc disk", 00:21:39.206 "block_size": 512, 00:21:39.206 "num_blocks": 65536, 00:21:39.206 "uuid": "f247e337-21f6-4af6-a3eb-bd88462363d1", 00:21:39.206 "assigned_rate_limits": { 00:21:39.206 "rw_ios_per_sec": 0, 00:21:39.206 "rw_mbytes_per_sec": 0, 00:21:39.206 "r_mbytes_per_sec": 0, 00:21:39.206 "w_mbytes_per_sec": 0 00:21:39.206 }, 
00:21:39.206 "claimed": true, 00:21:39.206 "claim_type": "exclusive_write", 00:21:39.206 "zoned": false, 00:21:39.206 "supported_io_types": { 00:21:39.206 "read": true, 00:21:39.206 "write": true, 00:21:39.206 "unmap": true, 00:21:39.206 "flush": true, 00:21:39.206 "reset": true, 00:21:39.206 "nvme_admin": false, 00:21:39.206 "nvme_io": false, 00:21:39.206 "nvme_io_md": false, 00:21:39.206 "write_zeroes": true, 00:21:39.206 "zcopy": true, 00:21:39.206 "get_zone_info": false, 00:21:39.206 "zone_management": false, 00:21:39.206 "zone_append": false, 00:21:39.206 "compare": false, 00:21:39.206 "compare_and_write": false, 00:21:39.206 "abort": true, 00:21:39.206 "seek_hole": false, 00:21:39.206 "seek_data": false, 00:21:39.206 "copy": true, 00:21:39.206 "nvme_iov_md": false 00:21:39.206 }, 00:21:39.206 "memory_domains": [ 00:21:39.206 { 00:21:39.206 "dma_device_id": "system", 00:21:39.206 "dma_device_type": 1 00:21:39.206 }, 00:21:39.206 { 00:21:39.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.206 "dma_device_type": 2 00:21:39.206 } 00:21:39.206 ], 00:21:39.206 "driver_specific": {} 00:21:39.206 } 00:21:39.206 ] 00:21:39.206 16:38:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:39.206 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:39.206 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:39.206 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:39.206 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:39.206 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:39.206 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:39.206 16:38:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:39.206 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:39.206 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:39.206 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:39.206 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:39.206 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:39.206 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.206 16:38:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.465 16:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:39.465 "name": "Existed_Raid", 00:21:39.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.465 "strip_size_kb": 64, 00:21:39.465 "state": "configuring", 00:21:39.465 "raid_level": "raid0", 00:21:39.465 "superblock": false, 00:21:39.465 "num_base_bdevs": 4, 00:21:39.465 "num_base_bdevs_discovered": 3, 00:21:39.465 "num_base_bdevs_operational": 4, 00:21:39.465 "base_bdevs_list": [ 00:21:39.465 { 00:21:39.465 "name": "BaseBdev1", 00:21:39.465 "uuid": "3fc70ad7-f292-4eb5-ad8e-d6634a188eeb", 00:21:39.465 "is_configured": true, 00:21:39.465 "data_offset": 0, 00:21:39.465 "data_size": 65536 00:21:39.465 }, 00:21:39.465 { 00:21:39.465 "name": "BaseBdev2", 00:21:39.465 "uuid": "37bb8e19-4885-46a4-b28b-78e8df66e698", 00:21:39.465 "is_configured": true, 00:21:39.465 "data_offset": 0, 00:21:39.465 "data_size": 65536 00:21:39.465 }, 00:21:39.465 { 00:21:39.465 "name": "BaseBdev3", 00:21:39.465 "uuid": 
"f247e337-21f6-4af6-a3eb-bd88462363d1", 00:21:39.465 "is_configured": true, 00:21:39.465 "data_offset": 0, 00:21:39.465 "data_size": 65536 00:21:39.465 }, 00:21:39.465 { 00:21:39.465 "name": "BaseBdev4", 00:21:39.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.465 "is_configured": false, 00:21:39.465 "data_offset": 0, 00:21:39.465 "data_size": 0 00:21:39.465 } 00:21:39.465 ] 00:21:39.465 }' 00:21:39.465 16:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:39.465 16:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.033 16:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:40.293 [2024-07-24 16:38:36.972265] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:40.293 [2024-07-24 16:38:36.972310] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:21:40.293 [2024-07-24 16:38:36.972324] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:40.293 [2024-07-24 16:38:36.972652] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:21:40.293 [2024-07-24 16:38:36.972901] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:21:40.293 [2024-07-24 16:38:36.972919] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:21:40.293 [2024-07-24 16:38:36.973231] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.293 BaseBdev4 00:21:40.293 16:38:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:40.293 16:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:21:40.293 
16:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:40.293 16:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:40.293 16:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:40.293 16:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:40.293 16:38:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:40.552 16:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:40.811 [ 00:21:40.811 { 00:21:40.811 "name": "BaseBdev4", 00:21:40.811 "aliases": [ 00:21:40.811 "5ae56397-80bd-4a00-8ff3-2e38ce86eced" 00:21:40.811 ], 00:21:40.811 "product_name": "Malloc disk", 00:21:40.811 "block_size": 512, 00:21:40.811 "num_blocks": 65536, 00:21:40.812 "uuid": "5ae56397-80bd-4a00-8ff3-2e38ce86eced", 00:21:40.812 "assigned_rate_limits": { 00:21:40.812 "rw_ios_per_sec": 0, 00:21:40.812 "rw_mbytes_per_sec": 0, 00:21:40.812 "r_mbytes_per_sec": 0, 00:21:40.812 "w_mbytes_per_sec": 0 00:21:40.812 }, 00:21:40.812 "claimed": true, 00:21:40.812 "claim_type": "exclusive_write", 00:21:40.812 "zoned": false, 00:21:40.812 "supported_io_types": { 00:21:40.812 "read": true, 00:21:40.812 "write": true, 00:21:40.812 "unmap": true, 00:21:40.812 "flush": true, 00:21:40.812 "reset": true, 00:21:40.812 "nvme_admin": false, 00:21:40.812 "nvme_io": false, 00:21:40.812 "nvme_io_md": false, 00:21:40.812 "write_zeroes": true, 00:21:40.812 "zcopy": true, 00:21:40.812 "get_zone_info": false, 00:21:40.812 "zone_management": false, 00:21:40.812 "zone_append": false, 00:21:40.812 "compare": false, 00:21:40.812 "compare_and_write": 
false, 00:21:40.812 "abort": true, 00:21:40.812 "seek_hole": false, 00:21:40.812 "seek_data": false, 00:21:40.812 "copy": true, 00:21:40.812 "nvme_iov_md": false 00:21:40.812 }, 00:21:40.812 "memory_domains": [ 00:21:40.812 { 00:21:40.812 "dma_device_id": "system", 00:21:40.812 "dma_device_type": 1 00:21:40.812 }, 00:21:40.812 { 00:21:40.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.812 "dma_device_type": 2 00:21:40.812 } 00:21:40.812 ], 00:21:40.812 "driver_specific": {} 00:21:40.812 } 00:21:40.812 ] 00:21:40.812 16:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:40.812 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:40.812 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:40.812 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:40.812 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:40.812 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:40.812 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:40.812 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:40.812 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:40.812 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.812 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.812 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.812 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.812 16:38:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.812 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.087 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.087 "name": "Existed_Raid", 00:21:41.087 "uuid": "376d4681-0c2a-46c9-9499-a52bf299f366", 00:21:41.087 "strip_size_kb": 64, 00:21:41.087 "state": "online", 00:21:41.087 "raid_level": "raid0", 00:21:41.087 "superblock": false, 00:21:41.087 "num_base_bdevs": 4, 00:21:41.087 "num_base_bdevs_discovered": 4, 00:21:41.087 "num_base_bdevs_operational": 4, 00:21:41.088 "base_bdevs_list": [ 00:21:41.088 { 00:21:41.088 "name": "BaseBdev1", 00:21:41.088 "uuid": "3fc70ad7-f292-4eb5-ad8e-d6634a188eeb", 00:21:41.088 "is_configured": true, 00:21:41.088 "data_offset": 0, 00:21:41.088 "data_size": 65536 00:21:41.088 }, 00:21:41.088 { 00:21:41.088 "name": "BaseBdev2", 00:21:41.088 "uuid": "37bb8e19-4885-46a4-b28b-78e8df66e698", 00:21:41.088 "is_configured": true, 00:21:41.088 "data_offset": 0, 00:21:41.088 "data_size": 65536 00:21:41.088 }, 00:21:41.088 { 00:21:41.088 "name": "BaseBdev3", 00:21:41.088 "uuid": "f247e337-21f6-4af6-a3eb-bd88462363d1", 00:21:41.088 "is_configured": true, 00:21:41.088 "data_offset": 0, 00:21:41.088 "data_size": 65536 00:21:41.088 }, 00:21:41.088 { 00:21:41.088 "name": "BaseBdev4", 00:21:41.088 "uuid": "5ae56397-80bd-4a00-8ff3-2e38ce86eced", 00:21:41.088 "is_configured": true, 00:21:41.088 "data_offset": 0, 00:21:41.088 "data_size": 65536 00:21:41.088 } 00:21:41.088 ] 00:21:41.088 }' 00:21:41.088 16:38:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.088 16:38:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.694 16:38:38 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:41.694 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:41.694 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:41.694 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:41.694 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:41.694 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:41.694 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:41.694 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:41.694 [2024-07-24 16:38:38.460935] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.694 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:41.694 "name": "Existed_Raid", 00:21:41.694 "aliases": [ 00:21:41.694 "376d4681-0c2a-46c9-9499-a52bf299f366" 00:21:41.694 ], 00:21:41.694 "product_name": "Raid Volume", 00:21:41.694 "block_size": 512, 00:21:41.694 "num_blocks": 262144, 00:21:41.694 "uuid": "376d4681-0c2a-46c9-9499-a52bf299f366", 00:21:41.694 "assigned_rate_limits": { 00:21:41.694 "rw_ios_per_sec": 0, 00:21:41.694 "rw_mbytes_per_sec": 0, 00:21:41.694 "r_mbytes_per_sec": 0, 00:21:41.694 "w_mbytes_per_sec": 0 00:21:41.694 }, 00:21:41.694 "claimed": false, 00:21:41.694 "zoned": false, 00:21:41.694 "supported_io_types": { 00:21:41.694 "read": true, 00:21:41.694 "write": true, 00:21:41.694 "unmap": true, 00:21:41.694 "flush": true, 00:21:41.694 "reset": true, 00:21:41.694 "nvme_admin": false, 00:21:41.694 "nvme_io": false, 00:21:41.694 "nvme_io_md": false, 00:21:41.694 
"write_zeroes": true, 00:21:41.694 "zcopy": false, 00:21:41.694 "get_zone_info": false, 00:21:41.694 "zone_management": false, 00:21:41.694 "zone_append": false, 00:21:41.694 "compare": false, 00:21:41.694 "compare_and_write": false, 00:21:41.694 "abort": false, 00:21:41.694 "seek_hole": false, 00:21:41.694 "seek_data": false, 00:21:41.694 "copy": false, 00:21:41.694 "nvme_iov_md": false 00:21:41.694 }, 00:21:41.694 "memory_domains": [ 00:21:41.694 { 00:21:41.694 "dma_device_id": "system", 00:21:41.694 "dma_device_type": 1 00:21:41.694 }, 00:21:41.694 { 00:21:41.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.694 "dma_device_type": 2 00:21:41.694 }, 00:21:41.694 { 00:21:41.694 "dma_device_id": "system", 00:21:41.694 "dma_device_type": 1 00:21:41.694 }, 00:21:41.694 { 00:21:41.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.694 "dma_device_type": 2 00:21:41.694 }, 00:21:41.694 { 00:21:41.694 "dma_device_id": "system", 00:21:41.694 "dma_device_type": 1 00:21:41.694 }, 00:21:41.694 { 00:21:41.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.694 "dma_device_type": 2 00:21:41.694 }, 00:21:41.694 { 00:21:41.694 "dma_device_id": "system", 00:21:41.694 "dma_device_type": 1 00:21:41.694 }, 00:21:41.694 { 00:21:41.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.695 "dma_device_type": 2 00:21:41.695 } 00:21:41.695 ], 00:21:41.695 "driver_specific": { 00:21:41.695 "raid": { 00:21:41.695 "uuid": "376d4681-0c2a-46c9-9499-a52bf299f366", 00:21:41.695 "strip_size_kb": 64, 00:21:41.695 "state": "online", 00:21:41.695 "raid_level": "raid0", 00:21:41.695 "superblock": false, 00:21:41.695 "num_base_bdevs": 4, 00:21:41.695 "num_base_bdevs_discovered": 4, 00:21:41.695 "num_base_bdevs_operational": 4, 00:21:41.695 "base_bdevs_list": [ 00:21:41.695 { 00:21:41.695 "name": "BaseBdev1", 00:21:41.695 "uuid": "3fc70ad7-f292-4eb5-ad8e-d6634a188eeb", 00:21:41.695 "is_configured": true, 00:21:41.695 "data_offset": 0, 00:21:41.695 "data_size": 65536 00:21:41.695 }, 
00:21:41.695 { 00:21:41.695 "name": "BaseBdev2", 00:21:41.695 "uuid": "37bb8e19-4885-46a4-b28b-78e8df66e698", 00:21:41.695 "is_configured": true, 00:21:41.695 "data_offset": 0, 00:21:41.695 "data_size": 65536 00:21:41.695 }, 00:21:41.695 { 00:21:41.695 "name": "BaseBdev3", 00:21:41.695 "uuid": "f247e337-21f6-4af6-a3eb-bd88462363d1", 00:21:41.695 "is_configured": true, 00:21:41.695 "data_offset": 0, 00:21:41.695 "data_size": 65536 00:21:41.695 }, 00:21:41.695 { 00:21:41.695 "name": "BaseBdev4", 00:21:41.695 "uuid": "5ae56397-80bd-4a00-8ff3-2e38ce86eced", 00:21:41.695 "is_configured": true, 00:21:41.695 "data_offset": 0, 00:21:41.695 "data_size": 65536 00:21:41.695 } 00:21:41.695 ] 00:21:41.695 } 00:21:41.695 } 00:21:41.695 }' 00:21:41.695 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:41.695 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:41.695 BaseBdev2 00:21:41.695 BaseBdev3 00:21:41.695 BaseBdev4' 00:21:41.695 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:41.695 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:41.695 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:41.953 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:41.953 "name": "BaseBdev1", 00:21:41.953 "aliases": [ 00:21:41.953 "3fc70ad7-f292-4eb5-ad8e-d6634a188eeb" 00:21:41.953 ], 00:21:41.953 "product_name": "Malloc disk", 00:21:41.953 "block_size": 512, 00:21:41.953 "num_blocks": 65536, 00:21:41.953 "uuid": "3fc70ad7-f292-4eb5-ad8e-d6634a188eeb", 00:21:41.953 "assigned_rate_limits": { 00:21:41.953 "rw_ios_per_sec": 0, 00:21:41.953 
"rw_mbytes_per_sec": 0, 00:21:41.953 "r_mbytes_per_sec": 0, 00:21:41.953 "w_mbytes_per_sec": 0 00:21:41.953 }, 00:21:41.953 "claimed": true, 00:21:41.953 "claim_type": "exclusive_write", 00:21:41.953 "zoned": false, 00:21:41.953 "supported_io_types": { 00:21:41.953 "read": true, 00:21:41.953 "write": true, 00:21:41.953 "unmap": true, 00:21:41.953 "flush": true, 00:21:41.953 "reset": true, 00:21:41.953 "nvme_admin": false, 00:21:41.953 "nvme_io": false, 00:21:41.953 "nvme_io_md": false, 00:21:41.953 "write_zeroes": true, 00:21:41.953 "zcopy": true, 00:21:41.953 "get_zone_info": false, 00:21:41.953 "zone_management": false, 00:21:41.953 "zone_append": false, 00:21:41.953 "compare": false, 00:21:41.953 "compare_and_write": false, 00:21:41.953 "abort": true, 00:21:41.953 "seek_hole": false, 00:21:41.953 "seek_data": false, 00:21:41.953 "copy": true, 00:21:41.953 "nvme_iov_md": false 00:21:41.953 }, 00:21:41.953 "memory_domains": [ 00:21:41.953 { 00:21:41.953 "dma_device_id": "system", 00:21:41.953 "dma_device_type": 1 00:21:41.953 }, 00:21:41.953 { 00:21:41.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.953 "dma_device_type": 2 00:21:41.953 } 00:21:41.953 ], 00:21:41.953 "driver_specific": {} 00:21:41.953 }' 00:21:41.953 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.953 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:42.211 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:42.211 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:42.211 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:42.211 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:42.211 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.211 16:38:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.211 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:42.211 16:38:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.211 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.469 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:42.469 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:42.469 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:42.469 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:42.469 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:42.469 "name": "BaseBdev2", 00:21:42.469 "aliases": [ 00:21:42.469 "37bb8e19-4885-46a4-b28b-78e8df66e698" 00:21:42.469 ], 00:21:42.469 "product_name": "Malloc disk", 00:21:42.469 "block_size": 512, 00:21:42.469 "num_blocks": 65536, 00:21:42.469 "uuid": "37bb8e19-4885-46a4-b28b-78e8df66e698", 00:21:42.469 "assigned_rate_limits": { 00:21:42.469 "rw_ios_per_sec": 0, 00:21:42.469 "rw_mbytes_per_sec": 0, 00:21:42.469 "r_mbytes_per_sec": 0, 00:21:42.469 "w_mbytes_per_sec": 0 00:21:42.469 }, 00:21:42.469 "claimed": true, 00:21:42.469 "claim_type": "exclusive_write", 00:21:42.469 "zoned": false, 00:21:42.469 "supported_io_types": { 00:21:42.469 "read": true, 00:21:42.469 "write": true, 00:21:42.469 "unmap": true, 00:21:42.469 "flush": true, 00:21:42.469 "reset": true, 00:21:42.469 "nvme_admin": false, 00:21:42.469 "nvme_io": false, 00:21:42.469 "nvme_io_md": false, 00:21:42.469 "write_zeroes": true, 00:21:42.469 "zcopy": true, 00:21:42.469 "get_zone_info": false, 
00:21:42.469 "zone_management": false, 00:21:42.469 "zone_append": false, 00:21:42.469 "compare": false, 00:21:42.469 "compare_and_write": false, 00:21:42.469 "abort": true, 00:21:42.469 "seek_hole": false, 00:21:42.469 "seek_data": false, 00:21:42.469 "copy": true, 00:21:42.469 "nvme_iov_md": false 00:21:42.469 }, 00:21:42.469 "memory_domains": [ 00:21:42.469 { 00:21:42.469 "dma_device_id": "system", 00:21:42.469 "dma_device_type": 1 00:21:42.469 }, 00:21:42.469 { 00:21:42.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.469 "dma_device_type": 2 00:21:42.469 } 00:21:42.469 ], 00:21:42.469 "driver_specific": {} 00:21:42.469 }' 00:21:42.469 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:42.727 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:42.727 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:42.727 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:42.727 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:42.727 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:42.727 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.727 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.727 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:42.727 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.727 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.986 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:42.986 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:21:42.986 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:42.986 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:42.986 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:42.986 "name": "BaseBdev3", 00:21:42.986 "aliases": [ 00:21:42.986 "f247e337-21f6-4af6-a3eb-bd88462363d1" 00:21:42.986 ], 00:21:42.986 "product_name": "Malloc disk", 00:21:42.986 "block_size": 512, 00:21:42.986 "num_blocks": 65536, 00:21:42.986 "uuid": "f247e337-21f6-4af6-a3eb-bd88462363d1", 00:21:42.986 "assigned_rate_limits": { 00:21:42.986 "rw_ios_per_sec": 0, 00:21:42.986 "rw_mbytes_per_sec": 0, 00:21:42.986 "r_mbytes_per_sec": 0, 00:21:42.986 "w_mbytes_per_sec": 0 00:21:42.986 }, 00:21:42.986 "claimed": true, 00:21:42.986 "claim_type": "exclusive_write", 00:21:42.986 "zoned": false, 00:21:42.986 "supported_io_types": { 00:21:42.986 "read": true, 00:21:42.986 "write": true, 00:21:42.986 "unmap": true, 00:21:42.986 "flush": true, 00:21:42.986 "reset": true, 00:21:42.986 "nvme_admin": false, 00:21:42.986 "nvme_io": false, 00:21:42.986 "nvme_io_md": false, 00:21:42.986 "write_zeroes": true, 00:21:42.986 "zcopy": true, 00:21:42.986 "get_zone_info": false, 00:21:42.986 "zone_management": false, 00:21:42.986 "zone_append": false, 00:21:42.986 "compare": false, 00:21:42.986 "compare_and_write": false, 00:21:42.986 "abort": true, 00:21:42.986 "seek_hole": false, 00:21:42.986 "seek_data": false, 00:21:42.986 "copy": true, 00:21:42.986 "nvme_iov_md": false 00:21:42.986 }, 00:21:42.986 "memory_domains": [ 00:21:42.986 { 00:21:42.986 "dma_device_id": "system", 00:21:42.986 "dma_device_type": 1 00:21:42.986 }, 00:21:42.986 { 00:21:42.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.986 "dma_device_type": 2 00:21:42.986 } 00:21:42.986 ], 00:21:42.986 
"driver_specific": {} 00:21:42.986 }' 00:21:43.252 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.252 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.252 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:43.252 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.252 16:38:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.252 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:43.252 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.252 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.252 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:43.252 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.515 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:43.515 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:43.515 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:43.515 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:43.515 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:43.774 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:43.774 "name": "BaseBdev4", 00:21:43.774 "aliases": [ 00:21:43.774 "5ae56397-80bd-4a00-8ff3-2e38ce86eced" 00:21:43.774 ], 00:21:43.774 "product_name": "Malloc disk", 00:21:43.774 "block_size": 512, 
00:21:43.774 "num_blocks": 65536, 00:21:43.775 "uuid": "5ae56397-80bd-4a00-8ff3-2e38ce86eced", 00:21:43.775 "assigned_rate_limits": { 00:21:43.775 "rw_ios_per_sec": 0, 00:21:43.775 "rw_mbytes_per_sec": 0, 00:21:43.775 "r_mbytes_per_sec": 0, 00:21:43.775 "w_mbytes_per_sec": 0 00:21:43.775 }, 00:21:43.775 "claimed": true, 00:21:43.775 "claim_type": "exclusive_write", 00:21:43.775 "zoned": false, 00:21:43.775 "supported_io_types": { 00:21:43.775 "read": true, 00:21:43.775 "write": true, 00:21:43.775 "unmap": true, 00:21:43.775 "flush": true, 00:21:43.775 "reset": true, 00:21:43.775 "nvme_admin": false, 00:21:43.775 "nvme_io": false, 00:21:43.775 "nvme_io_md": false, 00:21:43.775 "write_zeroes": true, 00:21:43.775 "zcopy": true, 00:21:43.775 "get_zone_info": false, 00:21:43.775 "zone_management": false, 00:21:43.775 "zone_append": false, 00:21:43.775 "compare": false, 00:21:43.775 "compare_and_write": false, 00:21:43.775 "abort": true, 00:21:43.775 "seek_hole": false, 00:21:43.775 "seek_data": false, 00:21:43.775 "copy": true, 00:21:43.775 "nvme_iov_md": false 00:21:43.775 }, 00:21:43.775 "memory_domains": [ 00:21:43.775 { 00:21:43.775 "dma_device_id": "system", 00:21:43.775 "dma_device_type": 1 00:21:43.775 }, 00:21:43.775 { 00:21:43.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:43.775 "dma_device_type": 2 00:21:43.775 } 00:21:43.775 ], 00:21:43.775 "driver_specific": {} 00:21:43.775 }' 00:21:43.775 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.775 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:43.775 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:43.775 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.775 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:43.775 16:38:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:43.775 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:43.775 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:44.034 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:44.034 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:44.034 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:44.034 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:44.034 16:38:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:44.293 [2024-07-24 16:38:40.967375] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:44.293 [2024-07-24 16:38:40.967412] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:44.293 [2024-07-24 16:38:40.967469] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.293 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.552 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:44.552 "name": "Existed_Raid", 00:21:44.552 "uuid": "376d4681-0c2a-46c9-9499-a52bf299f366", 00:21:44.552 "strip_size_kb": 64, 00:21:44.552 "state": "offline", 00:21:44.552 "raid_level": "raid0", 00:21:44.552 "superblock": false, 00:21:44.552 "num_base_bdevs": 4, 00:21:44.552 "num_base_bdevs_discovered": 3, 00:21:44.552 "num_base_bdevs_operational": 3, 00:21:44.552 "base_bdevs_list": [ 00:21:44.552 { 00:21:44.552 "name": null, 00:21:44.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.552 "is_configured": false, 00:21:44.552 "data_offset": 0, 00:21:44.552 "data_size": 65536 00:21:44.552 }, 00:21:44.552 { 00:21:44.552 
"name": "BaseBdev2", 00:21:44.552 "uuid": "37bb8e19-4885-46a4-b28b-78e8df66e698", 00:21:44.552 "is_configured": true, 00:21:44.552 "data_offset": 0, 00:21:44.552 "data_size": 65536 00:21:44.552 }, 00:21:44.552 { 00:21:44.552 "name": "BaseBdev3", 00:21:44.552 "uuid": "f247e337-21f6-4af6-a3eb-bd88462363d1", 00:21:44.552 "is_configured": true, 00:21:44.552 "data_offset": 0, 00:21:44.552 "data_size": 65536 00:21:44.552 }, 00:21:44.552 { 00:21:44.552 "name": "BaseBdev4", 00:21:44.552 "uuid": "5ae56397-80bd-4a00-8ff3-2e38ce86eced", 00:21:44.552 "is_configured": true, 00:21:44.552 "data_offset": 0, 00:21:44.552 "data_size": 65536 00:21:44.552 } 00:21:44.552 ] 00:21:44.552 }' 00:21:44.552 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:44.552 16:38:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.119 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:45.119 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:45.119 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.119 16:38:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:45.378 16:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:45.378 16:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:45.378 16:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:45.637 [2024-07-24 16:38:42.245505] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:45.637 16:38:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:45.637 16:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:45.637 16:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.637 16:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:45.896 16:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:45.896 16:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:45.896 16:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:46.156 [2024-07-24 16:38:42.837985] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:46.156 16:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:46.156 16:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:46.156 16:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.156 16:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:46.415 16:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:46.415 16:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:46.415 16:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete 
BaseBdev4 00:21:46.675 [2024-07-24 16:38:43.430400] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:46.675 [2024-07-24 16:38:43.430456] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:21:46.936 16:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:46.936 16:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:46.936 16:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.936 16:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:47.195 16:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:47.195 16:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:47.195 16:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:47.195 16:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:47.195 16:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:47.195 16:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:47.453 BaseBdev2 00:21:47.453 16:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:47.453 16:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:47.453 16:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:47.453 16:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local 
i 00:21:47.453 16:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:47.453 16:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:47.453 16:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:47.713 16:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:47.713 [ 00:21:47.713 { 00:21:47.713 "name": "BaseBdev2", 00:21:47.713 "aliases": [ 00:21:47.713 "d1884627-7ed0-44fb-bc35-39ad0f716ae4" 00:21:47.713 ], 00:21:47.713 "product_name": "Malloc disk", 00:21:47.713 "block_size": 512, 00:21:47.713 "num_blocks": 65536, 00:21:47.713 "uuid": "d1884627-7ed0-44fb-bc35-39ad0f716ae4", 00:21:47.713 "assigned_rate_limits": { 00:21:47.713 "rw_ios_per_sec": 0, 00:21:47.713 "rw_mbytes_per_sec": 0, 00:21:47.713 "r_mbytes_per_sec": 0, 00:21:47.713 "w_mbytes_per_sec": 0 00:21:47.713 }, 00:21:47.713 "claimed": false, 00:21:47.713 "zoned": false, 00:21:47.713 "supported_io_types": { 00:21:47.713 "read": true, 00:21:47.713 "write": true, 00:21:47.713 "unmap": true, 00:21:47.713 "flush": true, 00:21:47.713 "reset": true, 00:21:47.713 "nvme_admin": false, 00:21:47.713 "nvme_io": false, 00:21:47.713 "nvme_io_md": false, 00:21:47.713 "write_zeroes": true, 00:21:47.713 "zcopy": true, 00:21:47.713 "get_zone_info": false, 00:21:47.713 "zone_management": false, 00:21:47.713 "zone_append": false, 00:21:47.713 "compare": false, 00:21:47.713 "compare_and_write": false, 00:21:47.713 "abort": true, 00:21:47.713 "seek_hole": false, 00:21:47.713 "seek_data": false, 00:21:47.713 "copy": true, 00:21:47.713 "nvme_iov_md": false 00:21:47.713 }, 00:21:47.713 "memory_domains": [ 00:21:47.713 { 00:21:47.713 
"dma_device_id": "system", 00:21:47.713 "dma_device_type": 1 00:21:47.713 }, 00:21:47.713 { 00:21:47.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.713 "dma_device_type": 2 00:21:47.713 } 00:21:47.713 ], 00:21:47.713 "driver_specific": {} 00:21:47.713 } 00:21:47.713 ] 00:21:47.713 16:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:47.713 16:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:47.713 16:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:47.713 16:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:47.974 BaseBdev3 00:21:47.974 16:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:47.974 16:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:47.974 16:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:47.974 16:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:47.974 16:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:48.233 16:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:48.233 16:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:48.233 16:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:48.492 [ 00:21:48.492 { 00:21:48.492 "name": "BaseBdev3", 
00:21:48.492 "aliases": [ 00:21:48.492 "be97513f-cafa-4c66-b42b-ace235da35e6" 00:21:48.492 ], 00:21:48.492 "product_name": "Malloc disk", 00:21:48.492 "block_size": 512, 00:21:48.492 "num_blocks": 65536, 00:21:48.492 "uuid": "be97513f-cafa-4c66-b42b-ace235da35e6", 00:21:48.493 "assigned_rate_limits": { 00:21:48.493 "rw_ios_per_sec": 0, 00:21:48.493 "rw_mbytes_per_sec": 0, 00:21:48.493 "r_mbytes_per_sec": 0, 00:21:48.493 "w_mbytes_per_sec": 0 00:21:48.493 }, 00:21:48.493 "claimed": false, 00:21:48.493 "zoned": false, 00:21:48.493 "supported_io_types": { 00:21:48.493 "read": true, 00:21:48.493 "write": true, 00:21:48.493 "unmap": true, 00:21:48.493 "flush": true, 00:21:48.493 "reset": true, 00:21:48.493 "nvme_admin": false, 00:21:48.493 "nvme_io": false, 00:21:48.493 "nvme_io_md": false, 00:21:48.493 "write_zeroes": true, 00:21:48.493 "zcopy": true, 00:21:48.493 "get_zone_info": false, 00:21:48.493 "zone_management": false, 00:21:48.493 "zone_append": false, 00:21:48.493 "compare": false, 00:21:48.493 "compare_and_write": false, 00:21:48.493 "abort": true, 00:21:48.493 "seek_hole": false, 00:21:48.493 "seek_data": false, 00:21:48.493 "copy": true, 00:21:48.493 "nvme_iov_md": false 00:21:48.493 }, 00:21:48.493 "memory_domains": [ 00:21:48.493 { 00:21:48.493 "dma_device_id": "system", 00:21:48.493 "dma_device_type": 1 00:21:48.493 }, 00:21:48.493 { 00:21:48.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.493 "dma_device_type": 2 00:21:48.493 } 00:21:48.493 ], 00:21:48.493 "driver_specific": {} 00:21:48.493 } 00:21:48.493 ] 00:21:48.493 16:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:48.493 16:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:48.493 16:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:48.493 16:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:48.752 BaseBdev4 00:21:48.752 16:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:48.752 16:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:21:48.752 16:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:48.752 16:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:48.752 16:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:48.752 16:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:48.752 16:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:49.011 16:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:49.271 [ 00:21:49.271 { 00:21:49.271 "name": "BaseBdev4", 00:21:49.271 "aliases": [ 00:21:49.271 "78c588fe-bc08-4036-b759-6c82525c4b69" 00:21:49.271 ], 00:21:49.271 "product_name": "Malloc disk", 00:21:49.271 "block_size": 512, 00:21:49.271 "num_blocks": 65536, 00:21:49.271 "uuid": "78c588fe-bc08-4036-b759-6c82525c4b69", 00:21:49.271 "assigned_rate_limits": { 00:21:49.271 "rw_ios_per_sec": 0, 00:21:49.271 "rw_mbytes_per_sec": 0, 00:21:49.271 "r_mbytes_per_sec": 0, 00:21:49.271 "w_mbytes_per_sec": 0 00:21:49.271 }, 00:21:49.271 "claimed": false, 00:21:49.271 "zoned": false, 00:21:49.271 "supported_io_types": { 00:21:49.271 "read": true, 00:21:49.271 "write": true, 00:21:49.271 "unmap": true, 00:21:49.271 "flush": true, 00:21:49.271 
"reset": true, 00:21:49.271 "nvme_admin": false, 00:21:49.271 "nvme_io": false, 00:21:49.271 "nvme_io_md": false, 00:21:49.271 "write_zeroes": true, 00:21:49.271 "zcopy": true, 00:21:49.271 "get_zone_info": false, 00:21:49.271 "zone_management": false, 00:21:49.271 "zone_append": false, 00:21:49.271 "compare": false, 00:21:49.271 "compare_and_write": false, 00:21:49.271 "abort": true, 00:21:49.271 "seek_hole": false, 00:21:49.271 "seek_data": false, 00:21:49.271 "copy": true, 00:21:49.271 "nvme_iov_md": false 00:21:49.271 }, 00:21:49.271 "memory_domains": [ 00:21:49.271 { 00:21:49.271 "dma_device_id": "system", 00:21:49.271 "dma_device_type": 1 00:21:49.271 }, 00:21:49.271 { 00:21:49.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.271 "dma_device_type": 2 00:21:49.271 } 00:21:49.271 ], 00:21:49.271 "driver_specific": {} 00:21:49.271 } 00:21:49.271 ] 00:21:49.271 16:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:49.271 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:49.271 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:49.271 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:49.530 [2024-07-24 16:38:46.220754] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:49.530 [2024-07-24 16:38:46.220803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:49.530 [2024-07-24 16:38:46.220836] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:49.530 [2024-07-24 16:38:46.223134] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:49.530 [2024-07-24 
16:38:46.223203] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:49.530 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:49.530 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:49.530 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:49.530 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:49.530 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:49.530 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:49.530 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:49.530 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:49.530 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:49.531 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:49.531 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.531 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.790 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:49.790 "name": "Existed_Raid", 00:21:49.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.790 "strip_size_kb": 64, 00:21:49.790 "state": "configuring", 00:21:49.790 "raid_level": "raid0", 00:21:49.790 "superblock": false, 00:21:49.790 "num_base_bdevs": 4, 00:21:49.790 
"num_base_bdevs_discovered": 3, 00:21:49.790 "num_base_bdevs_operational": 4, 00:21:49.790 "base_bdevs_list": [ 00:21:49.790 { 00:21:49.790 "name": "BaseBdev1", 00:21:49.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.790 "is_configured": false, 00:21:49.790 "data_offset": 0, 00:21:49.790 "data_size": 0 00:21:49.790 }, 00:21:49.790 { 00:21:49.790 "name": "BaseBdev2", 00:21:49.790 "uuid": "d1884627-7ed0-44fb-bc35-39ad0f716ae4", 00:21:49.790 "is_configured": true, 00:21:49.790 "data_offset": 0, 00:21:49.790 "data_size": 65536 00:21:49.790 }, 00:21:49.790 { 00:21:49.790 "name": "BaseBdev3", 00:21:49.790 "uuid": "be97513f-cafa-4c66-b42b-ace235da35e6", 00:21:49.790 "is_configured": true, 00:21:49.790 "data_offset": 0, 00:21:49.790 "data_size": 65536 00:21:49.790 }, 00:21:49.790 { 00:21:49.790 "name": "BaseBdev4", 00:21:49.790 "uuid": "78c588fe-bc08-4036-b759-6c82525c4b69", 00:21:49.790 "is_configured": true, 00:21:49.790 "data_offset": 0, 00:21:49.790 "data_size": 65536 00:21:49.790 } 00:21:49.790 ] 00:21:49.790 }' 00:21:49.790 16:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:49.790 16:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.358 16:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:50.617 [2024-07-24 16:38:47.247482] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:50.617 16:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:50.617 16:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:50.617 16:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:50.617 16:38:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:50.617 16:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:50.617 16:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:50.617 16:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:50.617 16:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:50.617 16:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:50.617 16:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:50.617 16:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.617 16:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.875 16:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:50.875 "name": "Existed_Raid", 00:21:50.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.875 "strip_size_kb": 64, 00:21:50.875 "state": "configuring", 00:21:50.875 "raid_level": "raid0", 00:21:50.875 "superblock": false, 00:21:50.875 "num_base_bdevs": 4, 00:21:50.875 "num_base_bdevs_discovered": 2, 00:21:50.875 "num_base_bdevs_operational": 4, 00:21:50.875 "base_bdevs_list": [ 00:21:50.875 { 00:21:50.875 "name": "BaseBdev1", 00:21:50.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.875 "is_configured": false, 00:21:50.875 "data_offset": 0, 00:21:50.875 "data_size": 0 00:21:50.875 }, 00:21:50.875 { 00:21:50.875 "name": null, 00:21:50.875 "uuid": "d1884627-7ed0-44fb-bc35-39ad0f716ae4", 00:21:50.875 "is_configured": false, 00:21:50.875 "data_offset": 0, 00:21:50.875 
"data_size": 65536 00:21:50.875 }, 00:21:50.875 { 00:21:50.875 "name": "BaseBdev3", 00:21:50.875 "uuid": "be97513f-cafa-4c66-b42b-ace235da35e6", 00:21:50.875 "is_configured": true, 00:21:50.875 "data_offset": 0, 00:21:50.875 "data_size": 65536 00:21:50.875 }, 00:21:50.875 { 00:21:50.875 "name": "BaseBdev4", 00:21:50.875 "uuid": "78c588fe-bc08-4036-b759-6c82525c4b69", 00:21:50.875 "is_configured": true, 00:21:50.875 "data_offset": 0, 00:21:50.875 "data_size": 65536 00:21:50.875 } 00:21:50.875 ] 00:21:50.875 }' 00:21:50.875 16:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:50.875 16:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.442 16:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.442 16:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:51.442 16:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:51.442 16:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:51.701 [2024-07-24 16:38:48.551212] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:51.701 BaseBdev1 00:21:51.960 16:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:51.960 16:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:51.960 16:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:51.960 16:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:51.960 16:38:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:51.960 16:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:51.960 16:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:51.960 16:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:52.219 [ 00:21:52.219 { 00:21:52.219 "name": "BaseBdev1", 00:21:52.219 "aliases": [ 00:21:52.219 "e60ff520-ce27-479d-88d2-d29e10dfad74" 00:21:52.219 ], 00:21:52.219 "product_name": "Malloc disk", 00:21:52.219 "block_size": 512, 00:21:52.219 "num_blocks": 65536, 00:21:52.219 "uuid": "e60ff520-ce27-479d-88d2-d29e10dfad74", 00:21:52.219 "assigned_rate_limits": { 00:21:52.219 "rw_ios_per_sec": 0, 00:21:52.219 "rw_mbytes_per_sec": 0, 00:21:52.219 "r_mbytes_per_sec": 0, 00:21:52.220 "w_mbytes_per_sec": 0 00:21:52.220 }, 00:21:52.220 "claimed": true, 00:21:52.220 "claim_type": "exclusive_write", 00:21:52.220 "zoned": false, 00:21:52.220 "supported_io_types": { 00:21:52.220 "read": true, 00:21:52.220 "write": true, 00:21:52.220 "unmap": true, 00:21:52.220 "flush": true, 00:21:52.220 "reset": true, 00:21:52.220 "nvme_admin": false, 00:21:52.220 "nvme_io": false, 00:21:52.220 "nvme_io_md": false, 00:21:52.220 "write_zeroes": true, 00:21:52.220 "zcopy": true, 00:21:52.220 "get_zone_info": false, 00:21:52.220 "zone_management": false, 00:21:52.220 "zone_append": false, 00:21:52.220 "compare": false, 00:21:52.220 "compare_and_write": false, 00:21:52.220 "abort": true, 00:21:52.220 "seek_hole": false, 00:21:52.220 "seek_data": false, 00:21:52.220 "copy": true, 00:21:52.220 "nvme_iov_md": false 00:21:52.220 }, 00:21:52.220 "memory_domains": [ 00:21:52.220 { 
00:21:52.220 "dma_device_id": "system", 00:21:52.220 "dma_device_type": 1 00:21:52.220 }, 00:21:52.220 { 00:21:52.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.220 "dma_device_type": 2 00:21:52.220 } 00:21:52.220 ], 00:21:52.220 "driver_specific": {} 00:21:52.220 } 00:21:52.220 ] 00:21:52.220 16:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:52.220 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:52.220 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:52.220 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:52.220 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:52.220 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:52.220 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:52.220 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:52.220 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:52.220 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:52.220 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:52.220 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.220 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:52.479 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:21:52.479 "name": "Existed_Raid", 00:21:52.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.479 "strip_size_kb": 64, 00:21:52.479 "state": "configuring", 00:21:52.479 "raid_level": "raid0", 00:21:52.479 "superblock": false, 00:21:52.479 "num_base_bdevs": 4, 00:21:52.479 "num_base_bdevs_discovered": 3, 00:21:52.479 "num_base_bdevs_operational": 4, 00:21:52.479 "base_bdevs_list": [ 00:21:52.479 { 00:21:52.479 "name": "BaseBdev1", 00:21:52.479 "uuid": "e60ff520-ce27-479d-88d2-d29e10dfad74", 00:21:52.479 "is_configured": true, 00:21:52.479 "data_offset": 0, 00:21:52.479 "data_size": 65536 00:21:52.479 }, 00:21:52.479 { 00:21:52.479 "name": null, 00:21:52.479 "uuid": "d1884627-7ed0-44fb-bc35-39ad0f716ae4", 00:21:52.479 "is_configured": false, 00:21:52.479 "data_offset": 0, 00:21:52.479 "data_size": 65536 00:21:52.479 }, 00:21:52.479 { 00:21:52.479 "name": "BaseBdev3", 00:21:52.479 "uuid": "be97513f-cafa-4c66-b42b-ace235da35e6", 00:21:52.479 "is_configured": true, 00:21:52.479 "data_offset": 0, 00:21:52.479 "data_size": 65536 00:21:52.479 }, 00:21:52.479 { 00:21:52.479 "name": "BaseBdev4", 00:21:52.479 "uuid": "78c588fe-bc08-4036-b759-6c82525c4b69", 00:21:52.479 "is_configured": true, 00:21:52.479 "data_offset": 0, 00:21:52.479 "data_size": 65536 00:21:52.479 } 00:21:52.479 ] 00:21:52.479 }' 00:21:52.479 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:52.479 16:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.050 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.050 16:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:53.308 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:53.308 16:38:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:53.567 [2024-07-24 16:38:50.247899] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:53.567 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:53.567 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:53.567 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:53.567 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:53.567 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:53.567 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:53.567 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:53.567 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:53.567 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:53.567 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:53.567 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.567 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.826 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.826 "name": "Existed_Raid", 00:21:53.826 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:53.826 "strip_size_kb": 64, 00:21:53.826 "state": "configuring", 00:21:53.826 "raid_level": "raid0", 00:21:53.826 "superblock": false, 00:21:53.826 "num_base_bdevs": 4, 00:21:53.826 "num_base_bdevs_discovered": 2, 00:21:53.826 "num_base_bdevs_operational": 4, 00:21:53.826 "base_bdevs_list": [ 00:21:53.826 { 00:21:53.826 "name": "BaseBdev1", 00:21:53.826 "uuid": "e60ff520-ce27-479d-88d2-d29e10dfad74", 00:21:53.826 "is_configured": true, 00:21:53.826 "data_offset": 0, 00:21:53.826 "data_size": 65536 00:21:53.826 }, 00:21:53.826 { 00:21:53.826 "name": null, 00:21:53.826 "uuid": "d1884627-7ed0-44fb-bc35-39ad0f716ae4", 00:21:53.826 "is_configured": false, 00:21:53.826 "data_offset": 0, 00:21:53.826 "data_size": 65536 00:21:53.826 }, 00:21:53.826 { 00:21:53.826 "name": null, 00:21:53.826 "uuid": "be97513f-cafa-4c66-b42b-ace235da35e6", 00:21:53.826 "is_configured": false, 00:21:53.826 "data_offset": 0, 00:21:53.826 "data_size": 65536 00:21:53.826 }, 00:21:53.826 { 00:21:53.826 "name": "BaseBdev4", 00:21:53.826 "uuid": "78c588fe-bc08-4036-b759-6c82525c4b69", 00:21:53.826 "is_configured": true, 00:21:53.826 "data_offset": 0, 00:21:53.826 "data_size": 65536 00:21:53.826 } 00:21:53.826 ] 00:21:53.826 }' 00:21:53.826 16:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.826 16:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.393 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:54.393 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.393 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:54.393 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:54.652 [2024-07-24 16:38:51.439130] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:54.652 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:54.652 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:54.652 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:54.652 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:54.652 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:54.652 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:54.652 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:54.652 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:54.652 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:54.652 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:54.652 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.652 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.911 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:54.911 "name": "Existed_Raid", 00:21:54.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.911 "strip_size_kb": 64, 
00:21:54.911 "state": "configuring", 00:21:54.911 "raid_level": "raid0", 00:21:54.911 "superblock": false, 00:21:54.911 "num_base_bdevs": 4, 00:21:54.911 "num_base_bdevs_discovered": 3, 00:21:54.911 "num_base_bdevs_operational": 4, 00:21:54.911 "base_bdevs_list": [ 00:21:54.911 { 00:21:54.911 "name": "BaseBdev1", 00:21:54.911 "uuid": "e60ff520-ce27-479d-88d2-d29e10dfad74", 00:21:54.911 "is_configured": true, 00:21:54.911 "data_offset": 0, 00:21:54.911 "data_size": 65536 00:21:54.911 }, 00:21:54.911 { 00:21:54.911 "name": null, 00:21:54.911 "uuid": "d1884627-7ed0-44fb-bc35-39ad0f716ae4", 00:21:54.911 "is_configured": false, 00:21:54.911 "data_offset": 0, 00:21:54.911 "data_size": 65536 00:21:54.911 }, 00:21:54.911 { 00:21:54.911 "name": "BaseBdev3", 00:21:54.911 "uuid": "be97513f-cafa-4c66-b42b-ace235da35e6", 00:21:54.911 "is_configured": true, 00:21:54.911 "data_offset": 0, 00:21:54.911 "data_size": 65536 00:21:54.911 }, 00:21:54.911 { 00:21:54.911 "name": "BaseBdev4", 00:21:54.911 "uuid": "78c588fe-bc08-4036-b759-6c82525c4b69", 00:21:54.911 "is_configured": true, 00:21:54.911 "data_offset": 0, 00:21:54.911 "data_size": 65536 00:21:54.911 } 00:21:54.911 ] 00:21:54.911 }' 00:21:54.911 16:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:54.911 16:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.547 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.547 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:55.805 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:55.805 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:56.065 [2024-07-24 16:38:52.702601] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:56.065 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:56.065 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:56.065 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:56.065 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:56.065 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:56.065 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:56.065 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.065 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.065 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.065 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.065 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.065 16:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.324 16:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.324 "name": "Existed_Raid", 00:21:56.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.324 "strip_size_kb": 64, 00:21:56.324 "state": "configuring", 00:21:56.324 "raid_level": "raid0", 00:21:56.324 "superblock": false, 
00:21:56.324 "num_base_bdevs": 4, 00:21:56.324 "num_base_bdevs_discovered": 2, 00:21:56.324 "num_base_bdevs_operational": 4, 00:21:56.324 "base_bdevs_list": [ 00:21:56.324 { 00:21:56.324 "name": null, 00:21:56.324 "uuid": "e60ff520-ce27-479d-88d2-d29e10dfad74", 00:21:56.324 "is_configured": false, 00:21:56.324 "data_offset": 0, 00:21:56.324 "data_size": 65536 00:21:56.324 }, 00:21:56.324 { 00:21:56.324 "name": null, 00:21:56.324 "uuid": "d1884627-7ed0-44fb-bc35-39ad0f716ae4", 00:21:56.324 "is_configured": false, 00:21:56.324 "data_offset": 0, 00:21:56.324 "data_size": 65536 00:21:56.324 }, 00:21:56.324 { 00:21:56.324 "name": "BaseBdev3", 00:21:56.324 "uuid": "be97513f-cafa-4c66-b42b-ace235da35e6", 00:21:56.324 "is_configured": true, 00:21:56.324 "data_offset": 0, 00:21:56.324 "data_size": 65536 00:21:56.324 }, 00:21:56.324 { 00:21:56.324 "name": "BaseBdev4", 00:21:56.324 "uuid": "78c588fe-bc08-4036-b759-6c82525c4b69", 00:21:56.324 "is_configured": true, 00:21:56.324 "data_offset": 0, 00:21:56.324 "data_size": 65536 00:21:56.324 } 00:21:56.324 ] 00:21:56.324 }' 00:21:56.324 16:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.324 16:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.891 16:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.891 16:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:57.151 16:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:57.151 16:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:57.411 [2024-07-24 16:38:54.099928] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:57.411 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:57.411 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:57.411 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:57.411 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:57.411 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:57.411 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:57.411 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:57.411 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:57.411 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:57.411 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:57.411 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.411 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.669 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.669 "name": "Existed_Raid", 00:21:57.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.669 "strip_size_kb": 64, 00:21:57.669 "state": "configuring", 00:21:57.669 "raid_level": "raid0", 00:21:57.669 "superblock": false, 00:21:57.669 "num_base_bdevs": 4, 00:21:57.669 "num_base_bdevs_discovered": 3, 
00:21:57.669 "num_base_bdevs_operational": 4, 00:21:57.669 "base_bdevs_list": [ 00:21:57.669 { 00:21:57.669 "name": null, 00:21:57.669 "uuid": "e60ff520-ce27-479d-88d2-d29e10dfad74", 00:21:57.669 "is_configured": false, 00:21:57.669 "data_offset": 0, 00:21:57.669 "data_size": 65536 00:21:57.669 }, 00:21:57.669 { 00:21:57.669 "name": "BaseBdev2", 00:21:57.669 "uuid": "d1884627-7ed0-44fb-bc35-39ad0f716ae4", 00:21:57.669 "is_configured": true, 00:21:57.669 "data_offset": 0, 00:21:57.669 "data_size": 65536 00:21:57.669 }, 00:21:57.669 { 00:21:57.669 "name": "BaseBdev3", 00:21:57.669 "uuid": "be97513f-cafa-4c66-b42b-ace235da35e6", 00:21:57.669 "is_configured": true, 00:21:57.669 "data_offset": 0, 00:21:57.669 "data_size": 65536 00:21:57.669 }, 00:21:57.669 { 00:21:57.669 "name": "BaseBdev4", 00:21:57.669 "uuid": "78c588fe-bc08-4036-b759-6c82525c4b69", 00:21:57.669 "is_configured": true, 00:21:57.669 "data_offset": 0, 00:21:57.669 "data_size": 65536 00:21:57.669 } 00:21:57.669 ] 00:21:57.669 }' 00:21:57.669 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.669 16:38:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.235 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.235 16:38:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:58.494 16:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:58.494 16:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.494 16:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:58.494 16:38:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e60ff520-ce27-479d-88d2-d29e10dfad74 00:21:58.753 [2024-07-24 16:38:55.613453] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:58.753 [2024-07-24 16:38:55.613498] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:21:58.753 [2024-07-24 16:38:55.613510] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:58.753 [2024-07-24 16:38:55.613831] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:21:58.753 [2024-07-24 16:38:55.614033] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:21:58.753 [2024-07-24 16:38:55.614051] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:21:59.011 [2024-07-24 16:38:55.614361] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.011 NewBaseBdev 00:21:59.011 16:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:59.011 16:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:21:59.011 16:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:59.011 16:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:59.011 16:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:59.011 16:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:59.011 16:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:59.011 16:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:59.270 [ 00:21:59.270 { 00:21:59.270 "name": "NewBaseBdev", 00:21:59.270 "aliases": [ 00:21:59.270 "e60ff520-ce27-479d-88d2-d29e10dfad74" 00:21:59.270 ], 00:21:59.270 "product_name": "Malloc disk", 00:21:59.270 "block_size": 512, 00:21:59.270 "num_blocks": 65536, 00:21:59.270 "uuid": "e60ff520-ce27-479d-88d2-d29e10dfad74", 00:21:59.270 "assigned_rate_limits": { 00:21:59.270 "rw_ios_per_sec": 0, 00:21:59.270 "rw_mbytes_per_sec": 0, 00:21:59.270 "r_mbytes_per_sec": 0, 00:21:59.270 "w_mbytes_per_sec": 0 00:21:59.270 }, 00:21:59.270 "claimed": true, 00:21:59.270 "claim_type": "exclusive_write", 00:21:59.270 "zoned": false, 00:21:59.270 "supported_io_types": { 00:21:59.270 "read": true, 00:21:59.270 "write": true, 00:21:59.270 "unmap": true, 00:21:59.270 "flush": true, 00:21:59.270 "reset": true, 00:21:59.270 "nvme_admin": false, 00:21:59.270 "nvme_io": false, 00:21:59.270 "nvme_io_md": false, 00:21:59.270 "write_zeroes": true, 00:21:59.270 "zcopy": true, 00:21:59.270 "get_zone_info": false, 00:21:59.270 "zone_management": false, 00:21:59.270 "zone_append": false, 00:21:59.270 "compare": false, 00:21:59.270 "compare_and_write": false, 00:21:59.270 "abort": true, 00:21:59.270 "seek_hole": false, 00:21:59.270 "seek_data": false, 00:21:59.270 "copy": true, 00:21:59.270 "nvme_iov_md": false 00:21:59.270 }, 00:21:59.270 "memory_domains": [ 00:21:59.270 { 00:21:59.270 "dma_device_id": "system", 00:21:59.270 "dma_device_type": 1 00:21:59.270 }, 00:21:59.270 { 00:21:59.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.270 "dma_device_type": 2 00:21:59.270 } 00:21:59.270 ], 00:21:59.270 "driver_specific": {} 00:21:59.270 } 00:21:59.270 ] 00:21:59.270 16:38:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:21:59.270 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:59.270 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:59.270 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:59.270 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:59.270 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:59.270 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:59.270 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:59.270 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:59.270 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:59.270 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:59.270 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.271 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.530 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:59.530 "name": "Existed_Raid", 00:21:59.530 "uuid": "fc1e2b0f-a319-4bbc-8b61-62ee42074ecf", 00:21:59.530 "strip_size_kb": 64, 00:21:59.530 "state": "online", 00:21:59.530 "raid_level": "raid0", 00:21:59.530 "superblock": false, 00:21:59.530 "num_base_bdevs": 4, 00:21:59.530 "num_base_bdevs_discovered": 4, 00:21:59.530 "num_base_bdevs_operational": 4, 
00:21:59.530 "base_bdevs_list": [ 00:21:59.530 { 00:21:59.530 "name": "NewBaseBdev", 00:21:59.530 "uuid": "e60ff520-ce27-479d-88d2-d29e10dfad74", 00:21:59.530 "is_configured": true, 00:21:59.530 "data_offset": 0, 00:21:59.530 "data_size": 65536 00:21:59.530 }, 00:21:59.530 { 00:21:59.530 "name": "BaseBdev2", 00:21:59.530 "uuid": "d1884627-7ed0-44fb-bc35-39ad0f716ae4", 00:21:59.530 "is_configured": true, 00:21:59.530 "data_offset": 0, 00:21:59.530 "data_size": 65536 00:21:59.530 }, 00:21:59.530 { 00:21:59.530 "name": "BaseBdev3", 00:21:59.530 "uuid": "be97513f-cafa-4c66-b42b-ace235da35e6", 00:21:59.530 "is_configured": true, 00:21:59.530 "data_offset": 0, 00:21:59.530 "data_size": 65536 00:21:59.530 }, 00:21:59.530 { 00:21:59.530 "name": "BaseBdev4", 00:21:59.530 "uuid": "78c588fe-bc08-4036-b759-6c82525c4b69", 00:21:59.530 "is_configured": true, 00:21:59.530 "data_offset": 0, 00:21:59.530 "data_size": 65536 00:21:59.530 } 00:21:59.530 ] 00:21:59.530 }' 00:21:59.530 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:59.530 16:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.097 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:00.097 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:00.097 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:00.097 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:00.097 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:00.097 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:00.097 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:00.097 16:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:00.356 [2024-07-24 16:38:57.114004] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:00.356 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:00.356 "name": "Existed_Raid", 00:22:00.356 "aliases": [ 00:22:00.356 "fc1e2b0f-a319-4bbc-8b61-62ee42074ecf" 00:22:00.356 ], 00:22:00.356 "product_name": "Raid Volume", 00:22:00.356 "block_size": 512, 00:22:00.356 "num_blocks": 262144, 00:22:00.356 "uuid": "fc1e2b0f-a319-4bbc-8b61-62ee42074ecf", 00:22:00.356 "assigned_rate_limits": { 00:22:00.356 "rw_ios_per_sec": 0, 00:22:00.356 "rw_mbytes_per_sec": 0, 00:22:00.356 "r_mbytes_per_sec": 0, 00:22:00.356 "w_mbytes_per_sec": 0 00:22:00.356 }, 00:22:00.356 "claimed": false, 00:22:00.356 "zoned": false, 00:22:00.356 "supported_io_types": { 00:22:00.356 "read": true, 00:22:00.356 "write": true, 00:22:00.356 "unmap": true, 00:22:00.356 "flush": true, 00:22:00.356 "reset": true, 00:22:00.356 "nvme_admin": false, 00:22:00.356 "nvme_io": false, 00:22:00.356 "nvme_io_md": false, 00:22:00.356 "write_zeroes": true, 00:22:00.356 "zcopy": false, 00:22:00.356 "get_zone_info": false, 00:22:00.356 "zone_management": false, 00:22:00.356 "zone_append": false, 00:22:00.356 "compare": false, 00:22:00.356 "compare_and_write": false, 00:22:00.356 "abort": false, 00:22:00.357 "seek_hole": false, 00:22:00.357 "seek_data": false, 00:22:00.357 "copy": false, 00:22:00.357 "nvme_iov_md": false 00:22:00.357 }, 00:22:00.357 "memory_domains": [ 00:22:00.357 { 00:22:00.357 "dma_device_id": "system", 00:22:00.357 "dma_device_type": 1 00:22:00.357 }, 00:22:00.357 { 00:22:00.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.357 "dma_device_type": 2 00:22:00.357 }, 00:22:00.357 { 00:22:00.357 "dma_device_id": "system", 00:22:00.357 "dma_device_type": 1 00:22:00.357 }, 
00:22:00.357 { 00:22:00.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.357 "dma_device_type": 2 00:22:00.357 }, 00:22:00.357 { 00:22:00.357 "dma_device_id": "system", 00:22:00.357 "dma_device_type": 1 00:22:00.357 }, 00:22:00.357 { 00:22:00.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.357 "dma_device_type": 2 00:22:00.357 }, 00:22:00.357 { 00:22:00.357 "dma_device_id": "system", 00:22:00.357 "dma_device_type": 1 00:22:00.357 }, 00:22:00.357 { 00:22:00.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.357 "dma_device_type": 2 00:22:00.357 } 00:22:00.357 ], 00:22:00.357 "driver_specific": { 00:22:00.357 "raid": { 00:22:00.357 "uuid": "fc1e2b0f-a319-4bbc-8b61-62ee42074ecf", 00:22:00.357 "strip_size_kb": 64, 00:22:00.357 "state": "online", 00:22:00.357 "raid_level": "raid0", 00:22:00.357 "superblock": false, 00:22:00.357 "num_base_bdevs": 4, 00:22:00.357 "num_base_bdevs_discovered": 4, 00:22:00.357 "num_base_bdevs_operational": 4, 00:22:00.357 "base_bdevs_list": [ 00:22:00.357 { 00:22:00.357 "name": "NewBaseBdev", 00:22:00.357 "uuid": "e60ff520-ce27-479d-88d2-d29e10dfad74", 00:22:00.357 "is_configured": true, 00:22:00.357 "data_offset": 0, 00:22:00.357 "data_size": 65536 00:22:00.357 }, 00:22:00.357 { 00:22:00.357 "name": "BaseBdev2", 00:22:00.357 "uuid": "d1884627-7ed0-44fb-bc35-39ad0f716ae4", 00:22:00.357 "is_configured": true, 00:22:00.357 "data_offset": 0, 00:22:00.357 "data_size": 65536 00:22:00.357 }, 00:22:00.357 { 00:22:00.357 "name": "BaseBdev3", 00:22:00.357 "uuid": "be97513f-cafa-4c66-b42b-ace235da35e6", 00:22:00.357 "is_configured": true, 00:22:00.357 "data_offset": 0, 00:22:00.357 "data_size": 65536 00:22:00.357 }, 00:22:00.357 { 00:22:00.357 "name": "BaseBdev4", 00:22:00.357 "uuid": "78c588fe-bc08-4036-b759-6c82525c4b69", 00:22:00.357 "is_configured": true, 00:22:00.357 "data_offset": 0, 00:22:00.357 "data_size": 65536 00:22:00.357 } 00:22:00.357 ] 00:22:00.357 } 00:22:00.357 } 00:22:00.357 }' 00:22:00.357 16:38:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:00.357 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:00.357 BaseBdev2 00:22:00.357 BaseBdev3 00:22:00.357 BaseBdev4' 00:22:00.357 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:00.357 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:00.357 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:00.616 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:00.616 "name": "NewBaseBdev", 00:22:00.616 "aliases": [ 00:22:00.616 "e60ff520-ce27-479d-88d2-d29e10dfad74" 00:22:00.616 ], 00:22:00.616 "product_name": "Malloc disk", 00:22:00.616 "block_size": 512, 00:22:00.616 "num_blocks": 65536, 00:22:00.616 "uuid": "e60ff520-ce27-479d-88d2-d29e10dfad74", 00:22:00.616 "assigned_rate_limits": { 00:22:00.616 "rw_ios_per_sec": 0, 00:22:00.616 "rw_mbytes_per_sec": 0, 00:22:00.616 "r_mbytes_per_sec": 0, 00:22:00.616 "w_mbytes_per_sec": 0 00:22:00.616 }, 00:22:00.616 "claimed": true, 00:22:00.616 "claim_type": "exclusive_write", 00:22:00.616 "zoned": false, 00:22:00.616 "supported_io_types": { 00:22:00.616 "read": true, 00:22:00.616 "write": true, 00:22:00.616 "unmap": true, 00:22:00.616 "flush": true, 00:22:00.616 "reset": true, 00:22:00.616 "nvme_admin": false, 00:22:00.616 "nvme_io": false, 00:22:00.616 "nvme_io_md": false, 00:22:00.616 "write_zeroes": true, 00:22:00.616 "zcopy": true, 00:22:00.616 "get_zone_info": false, 00:22:00.616 "zone_management": false, 00:22:00.616 "zone_append": false, 00:22:00.616 "compare": false, 00:22:00.616 "compare_and_write": false, 00:22:00.616 
"abort": true, 00:22:00.616 "seek_hole": false, 00:22:00.616 "seek_data": false, 00:22:00.616 "copy": true, 00:22:00.616 "nvme_iov_md": false 00:22:00.616 }, 00:22:00.616 "memory_domains": [ 00:22:00.616 { 00:22:00.616 "dma_device_id": "system", 00:22:00.616 "dma_device_type": 1 00:22:00.616 }, 00:22:00.616 { 00:22:00.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.616 "dma_device_type": 2 00:22:00.616 } 00:22:00.616 ], 00:22:00.616 "driver_specific": {} 00:22:00.616 }' 00:22:00.616 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:00.616 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:00.875 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:00.875 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:00.875 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:00.875 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:00.875 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:00.875 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:00.875 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:00.875 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:00.875 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:01.134 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:01.134 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:01.134 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:01.134 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:01.134 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:01.134 "name": "BaseBdev2", 00:22:01.134 "aliases": [ 00:22:01.134 "d1884627-7ed0-44fb-bc35-39ad0f716ae4" 00:22:01.134 ], 00:22:01.134 "product_name": "Malloc disk", 00:22:01.134 "block_size": 512, 00:22:01.134 "num_blocks": 65536, 00:22:01.134 "uuid": "d1884627-7ed0-44fb-bc35-39ad0f716ae4", 00:22:01.134 "assigned_rate_limits": { 00:22:01.134 "rw_ios_per_sec": 0, 00:22:01.134 "rw_mbytes_per_sec": 0, 00:22:01.134 "r_mbytes_per_sec": 0, 00:22:01.134 "w_mbytes_per_sec": 0 00:22:01.134 }, 00:22:01.134 "claimed": true, 00:22:01.134 "claim_type": "exclusive_write", 00:22:01.134 "zoned": false, 00:22:01.134 "supported_io_types": { 00:22:01.134 "read": true, 00:22:01.134 "write": true, 00:22:01.134 "unmap": true, 00:22:01.134 "flush": true, 00:22:01.134 "reset": true, 00:22:01.134 "nvme_admin": false, 00:22:01.134 "nvme_io": false, 00:22:01.134 "nvme_io_md": false, 00:22:01.134 "write_zeroes": true, 00:22:01.134 "zcopy": true, 00:22:01.134 "get_zone_info": false, 00:22:01.134 "zone_management": false, 00:22:01.134 "zone_append": false, 00:22:01.134 "compare": false, 00:22:01.134 "compare_and_write": false, 00:22:01.134 "abort": true, 00:22:01.134 "seek_hole": false, 00:22:01.134 "seek_data": false, 00:22:01.134 "copy": true, 00:22:01.134 "nvme_iov_md": false 00:22:01.134 }, 00:22:01.135 "memory_domains": [ 00:22:01.135 { 00:22:01.135 "dma_device_id": "system", 00:22:01.135 "dma_device_type": 1 00:22:01.135 }, 00:22:01.135 { 00:22:01.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.135 "dma_device_type": 2 00:22:01.135 } 00:22:01.135 ], 00:22:01.135 "driver_specific": {} 00:22:01.135 }' 00:22:01.135 16:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:01.393 16:38:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:01.393 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:01.393 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:01.393 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:01.393 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:01.393 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:01.393 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:01.393 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:01.393 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:01.651 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:01.651 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:01.651 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:01.651 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:01.651 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:01.910 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:01.910 "name": "BaseBdev3", 00:22:01.910 "aliases": [ 00:22:01.910 "be97513f-cafa-4c66-b42b-ace235da35e6" 00:22:01.910 ], 00:22:01.910 "product_name": "Malloc disk", 00:22:01.910 "block_size": 512, 00:22:01.910 "num_blocks": 65536, 00:22:01.910 "uuid": "be97513f-cafa-4c66-b42b-ace235da35e6", 00:22:01.910 "assigned_rate_limits": { 00:22:01.910 
"rw_ios_per_sec": 0, 00:22:01.910 "rw_mbytes_per_sec": 0, 00:22:01.910 "r_mbytes_per_sec": 0, 00:22:01.910 "w_mbytes_per_sec": 0 00:22:01.910 }, 00:22:01.910 "claimed": true, 00:22:01.910 "claim_type": "exclusive_write", 00:22:01.910 "zoned": false, 00:22:01.910 "supported_io_types": { 00:22:01.910 "read": true, 00:22:01.910 "write": true, 00:22:01.910 "unmap": true, 00:22:01.910 "flush": true, 00:22:01.910 "reset": true, 00:22:01.910 "nvme_admin": false, 00:22:01.910 "nvme_io": false, 00:22:01.910 "nvme_io_md": false, 00:22:01.910 "write_zeroes": true, 00:22:01.910 "zcopy": true, 00:22:01.910 "get_zone_info": false, 00:22:01.910 "zone_management": false, 00:22:01.910 "zone_append": false, 00:22:01.910 "compare": false, 00:22:01.910 "compare_and_write": false, 00:22:01.910 "abort": true, 00:22:01.910 "seek_hole": false, 00:22:01.910 "seek_data": false, 00:22:01.910 "copy": true, 00:22:01.910 "nvme_iov_md": false 00:22:01.910 }, 00:22:01.910 "memory_domains": [ 00:22:01.910 { 00:22:01.910 "dma_device_id": "system", 00:22:01.910 "dma_device_type": 1 00:22:01.910 }, 00:22:01.910 { 00:22:01.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.910 "dma_device_type": 2 00:22:01.910 } 00:22:01.910 ], 00:22:01.910 "driver_specific": {} 00:22:01.910 }' 00:22:01.910 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:01.910 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:01.910 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:01.910 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:01.910 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:01.910 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:01.910 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:01.910 
16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:02.169 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:02.169 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:02.169 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:02.169 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:02.169 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:02.169 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:02.169 16:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:02.427 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:02.427 "name": "BaseBdev4", 00:22:02.427 "aliases": [ 00:22:02.427 "78c588fe-bc08-4036-b759-6c82525c4b69" 00:22:02.427 ], 00:22:02.427 "product_name": "Malloc disk", 00:22:02.427 "block_size": 512, 00:22:02.427 "num_blocks": 65536, 00:22:02.427 "uuid": "78c588fe-bc08-4036-b759-6c82525c4b69", 00:22:02.428 "assigned_rate_limits": { 00:22:02.428 "rw_ios_per_sec": 0, 00:22:02.428 "rw_mbytes_per_sec": 0, 00:22:02.428 "r_mbytes_per_sec": 0, 00:22:02.428 "w_mbytes_per_sec": 0 00:22:02.428 }, 00:22:02.428 "claimed": true, 00:22:02.428 "claim_type": "exclusive_write", 00:22:02.428 "zoned": false, 00:22:02.428 "supported_io_types": { 00:22:02.428 "read": true, 00:22:02.428 "write": true, 00:22:02.428 "unmap": true, 00:22:02.428 "flush": true, 00:22:02.428 "reset": true, 00:22:02.428 "nvme_admin": false, 00:22:02.428 "nvme_io": false, 00:22:02.428 "nvme_io_md": false, 00:22:02.428 "write_zeroes": true, 00:22:02.428 "zcopy": true, 00:22:02.428 "get_zone_info": 
false, 00:22:02.428 "zone_management": false, 00:22:02.428 "zone_append": false, 00:22:02.428 "compare": false, 00:22:02.428 "compare_and_write": false, 00:22:02.428 "abort": true, 00:22:02.428 "seek_hole": false, 00:22:02.428 "seek_data": false, 00:22:02.428 "copy": true, 00:22:02.428 "nvme_iov_md": false 00:22:02.428 }, 00:22:02.428 "memory_domains": [ 00:22:02.428 { 00:22:02.428 "dma_device_id": "system", 00:22:02.428 "dma_device_type": 1 00:22:02.428 }, 00:22:02.428 { 00:22:02.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:02.428 "dma_device_type": 2 00:22:02.428 } 00:22:02.428 ], 00:22:02.428 "driver_specific": {} 00:22:02.428 }' 00:22:02.428 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:02.428 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:02.428 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:02.428 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:02.428 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:02.428 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:02.428 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:02.686 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:02.686 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:02.686 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:02.686 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:02.686 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:02.686 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:02.945 [2024-07-24 16:38:59.648491] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:02.945 [2024-07-24 16:38:59.648527] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:02.945 [2024-07-24 16:38:59.648610] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:02.945 [2024-07-24 16:38:59.648689] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:02.945 [2024-07-24 16:38:59.648706] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:22:02.945 16:38:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1678838 00:22:02.945 16:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 1678838 ']' 00:22:02.945 16:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 1678838 00:22:02.945 16:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:22:02.945 16:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.945 16:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1678838 00:22:02.945 16:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:02.945 16:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:02.945 16:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1678838' 00:22:02.945 killing process with pid 1678838 00:22:02.945 16:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 
1678838 00:22:02.945 [2024-07-24 16:38:59.727341] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:02.945 16:38:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 1678838 00:22:03.511 [2024-07-24 16:39:00.207542] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:05.416 16:39:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:22:05.416 00:22:05.416 real 0m33.745s 00:22:05.416 user 0m58.999s 00:22:05.416 sys 0m5.739s 00:22:05.416 16:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:05.416 16:39:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.416 ************************************ 00:22:05.416 END TEST raid_state_function_test 00:22:05.416 ************************************ 00:22:05.417 16:39:01 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:22:05.417 16:39:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:05.417 16:39:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:05.417 16:39:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:05.417 ************************************ 00:22:05.417 START TEST raid_state_function_test_sb 00:22:05.417 ************************************ 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 
00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 
00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1685226 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1685226' 00:22:05.417 Process raid pid: 1685226 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1685226 /var/tmp/spdk-raid.sock 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 1685226 ']' 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:22:05.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:05.417 16:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.417 [2024-07-24 16:39:02.147365] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:22:05.417 [2024-07-24 16:39:02.147484] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.676 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:05.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.676 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:05.676 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3d:02.0 cannot be used 
00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:05.677 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:05.677 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:05.677 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:05.677 [2024-07-24 16:39:02.375597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.936 [2024-07-24 16:39:02.666738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.195 [2024-07-24 16:39:03.019911] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:06.195 [2024-07-24 16:39:03.019946] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:06.454 16:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:06.454 16:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:22:06.454 16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:06.713 [2024-07-24 16:39:03.419575] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:06.713 [2024-07-24 16:39:03.419632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:06.713 [2024-07-24 16:39:03.419647] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:06.714 [2024-07-24 16:39:03.419664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:06.714 [2024-07-24 16:39:03.419676] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:06.714 [2024-07-24 16:39:03.419692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:06.714 [2024-07-24 16:39:03.419703] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:06.714 [2024-07-24 16:39:03.419722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:06.714 16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:06.714 16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:06.714 16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:06.714 16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:06.714 16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:06.714 16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:06.714 
16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:06.714 16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:06.714 16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:06.714 16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:06.714 16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.714 16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.973 16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:06.973 "name": "Existed_Raid", 00:22:06.973 "uuid": "d58df64b-c9f2-4847-8206-83feaa09e3a2", 00:22:06.973 "strip_size_kb": 64, 00:22:06.973 "state": "configuring", 00:22:06.973 "raid_level": "raid0", 00:22:06.973 "superblock": true, 00:22:06.973 "num_base_bdevs": 4, 00:22:06.973 "num_base_bdevs_discovered": 0, 00:22:06.973 "num_base_bdevs_operational": 4, 00:22:06.973 "base_bdevs_list": [ 00:22:06.973 { 00:22:06.973 "name": "BaseBdev1", 00:22:06.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.973 "is_configured": false, 00:22:06.973 "data_offset": 0, 00:22:06.973 "data_size": 0 00:22:06.973 }, 00:22:06.973 { 00:22:06.973 "name": "BaseBdev2", 00:22:06.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.973 "is_configured": false, 00:22:06.973 "data_offset": 0, 00:22:06.973 "data_size": 0 00:22:06.973 }, 00:22:06.973 { 00:22:06.973 "name": "BaseBdev3", 00:22:06.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.973 "is_configured": false, 00:22:06.973 "data_offset": 0, 00:22:06.973 "data_size": 0 00:22:06.973 }, 00:22:06.973 { 00:22:06.973 "name": "BaseBdev4", 
00:22:06.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.973 "is_configured": false, 00:22:06.973 "data_offset": 0, 00:22:06.973 "data_size": 0 00:22:06.973 } 00:22:06.973 ] 00:22:06.973 }' 00:22:06.973 16:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:06.973 16:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.541 16:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:07.800 [2024-07-24 16:39:04.450203] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:07.800 [2024-07-24 16:39:04.450243] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:22:07.800 16:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:08.059 [2024-07-24 16:39:04.682896] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:08.059 [2024-07-24 16:39:04.682945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:08.059 [2024-07-24 16:39:04.682959] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:08.059 [2024-07-24 16:39:04.682983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:08.059 [2024-07-24 16:39:04.682994] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:08.059 [2024-07-24 16:39:04.683011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:08.059 [2024-07-24 
16:39:04.683022] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:08.059 [2024-07-24 16:39:04.683038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:08.059 16:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:08.317 [2024-07-24 16:39:04.958212] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:08.317 BaseBdev1 00:22:08.317 16:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:08.317 16:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:08.317 16:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:08.317 16:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:08.317 16:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:08.317 16:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:08.317 16:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:08.576 16:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:08.576 [ 00:22:08.576 { 00:22:08.576 "name": "BaseBdev1", 00:22:08.576 "aliases": [ 00:22:08.576 "717c95b4-2103-4e61-8e0f-ac1c4a5225bb" 00:22:08.576 ], 00:22:08.576 "product_name": "Malloc disk", 00:22:08.576 "block_size": 512, 00:22:08.576 "num_blocks": 65536, 
00:22:08.576 "uuid": "717c95b4-2103-4e61-8e0f-ac1c4a5225bb", 00:22:08.576 "assigned_rate_limits": { 00:22:08.576 "rw_ios_per_sec": 0, 00:22:08.576 "rw_mbytes_per_sec": 0, 00:22:08.576 "r_mbytes_per_sec": 0, 00:22:08.576 "w_mbytes_per_sec": 0 00:22:08.576 }, 00:22:08.576 "claimed": true, 00:22:08.576 "claim_type": "exclusive_write", 00:22:08.576 "zoned": false, 00:22:08.576 "supported_io_types": { 00:22:08.576 "read": true, 00:22:08.576 "write": true, 00:22:08.576 "unmap": true, 00:22:08.576 "flush": true, 00:22:08.576 "reset": true, 00:22:08.576 "nvme_admin": false, 00:22:08.576 "nvme_io": false, 00:22:08.576 "nvme_io_md": false, 00:22:08.576 "write_zeroes": true, 00:22:08.576 "zcopy": true, 00:22:08.576 "get_zone_info": false, 00:22:08.576 "zone_management": false, 00:22:08.576 "zone_append": false, 00:22:08.576 "compare": false, 00:22:08.576 "compare_and_write": false, 00:22:08.576 "abort": true, 00:22:08.576 "seek_hole": false, 00:22:08.576 "seek_data": false, 00:22:08.576 "copy": true, 00:22:08.576 "nvme_iov_md": false 00:22:08.576 }, 00:22:08.576 "memory_domains": [ 00:22:08.576 { 00:22:08.576 "dma_device_id": "system", 00:22:08.576 "dma_device_type": 1 00:22:08.576 }, 00:22:08.576 { 00:22:08.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.576 "dma_device_type": 2 00:22:08.576 } 00:22:08.576 ], 00:22:08.576 "driver_specific": {} 00:22:08.576 } 00:22:08.576 ] 00:22:08.576 16:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:08.576 16:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:08.576 16:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:08.576 16:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:08.576 16:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:22:08.576 16:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:08.576 16:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:08.576 16:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:08.576 16:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:08.576 16:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:08.576 16:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:08.576 16:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.576 16:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.834 16:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:08.834 "name": "Existed_Raid", 00:22:08.834 "uuid": "d06506de-1334-41a4-a765-bbeff4aa928d", 00:22:08.834 "strip_size_kb": 64, 00:22:08.834 "state": "configuring", 00:22:08.834 "raid_level": "raid0", 00:22:08.834 "superblock": true, 00:22:08.834 "num_base_bdevs": 4, 00:22:08.834 "num_base_bdevs_discovered": 1, 00:22:08.834 "num_base_bdevs_operational": 4, 00:22:08.834 "base_bdevs_list": [ 00:22:08.834 { 00:22:08.834 "name": "BaseBdev1", 00:22:08.834 "uuid": "717c95b4-2103-4e61-8e0f-ac1c4a5225bb", 00:22:08.834 "is_configured": true, 00:22:08.834 "data_offset": 2048, 00:22:08.834 "data_size": 63488 00:22:08.834 }, 00:22:08.834 { 00:22:08.834 "name": "BaseBdev2", 00:22:08.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.834 "is_configured": false, 00:22:08.834 "data_offset": 0, 00:22:08.834 "data_size": 0 00:22:08.834 }, 
00:22:08.834 { 00:22:08.834 "name": "BaseBdev3", 00:22:08.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.834 "is_configured": false, 00:22:08.834 "data_offset": 0, 00:22:08.834 "data_size": 0 00:22:08.834 }, 00:22:08.834 { 00:22:08.834 "name": "BaseBdev4", 00:22:08.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.834 "is_configured": false, 00:22:08.834 "data_offset": 0, 00:22:08.834 "data_size": 0 00:22:08.834 } 00:22:08.834 ] 00:22:08.834 }' 00:22:08.834 16:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:08.834 16:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.400 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:09.658 [2024-07-24 16:39:06.434260] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:09.658 [2024-07-24 16:39:06.434313] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:22:09.658 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:09.948 [2024-07-24 16:39:06.662982] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:09.949 [2024-07-24 16:39:06.665293] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:09.949 [2024-07-24 16:39:06.665336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:09.949 [2024-07-24 16:39:06.665350] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:09.949 [2024-07-24 
16:39:06.665367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:09.949 [2024-07-24 16:39:06.665379] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:09.949 [2024-07-24 16:39:06.665398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:09.949 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:09.949 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:09.949 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:09.949 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:09.949 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:09.949 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:09.949 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:09.949 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:09.949 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:09.949 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:09.949 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:09.949 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:09.949 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.949 16:39:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.207 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:10.207 "name": "Existed_Raid", 00:22:10.207 "uuid": "4c76dc11-1ebd-4859-a686-42913e5224a7", 00:22:10.207 "strip_size_kb": 64, 00:22:10.207 "state": "configuring", 00:22:10.207 "raid_level": "raid0", 00:22:10.207 "superblock": true, 00:22:10.207 "num_base_bdevs": 4, 00:22:10.207 "num_base_bdevs_discovered": 1, 00:22:10.207 "num_base_bdevs_operational": 4, 00:22:10.207 "base_bdevs_list": [ 00:22:10.207 { 00:22:10.207 "name": "BaseBdev1", 00:22:10.207 "uuid": "717c95b4-2103-4e61-8e0f-ac1c4a5225bb", 00:22:10.207 "is_configured": true, 00:22:10.207 "data_offset": 2048, 00:22:10.207 "data_size": 63488 00:22:10.207 }, 00:22:10.207 { 00:22:10.207 "name": "BaseBdev2", 00:22:10.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.207 "is_configured": false, 00:22:10.207 "data_offset": 0, 00:22:10.207 "data_size": 0 00:22:10.207 }, 00:22:10.207 { 00:22:10.207 "name": "BaseBdev3", 00:22:10.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.207 "is_configured": false, 00:22:10.207 "data_offset": 0, 00:22:10.207 "data_size": 0 00:22:10.207 }, 00:22:10.207 { 00:22:10.207 "name": "BaseBdev4", 00:22:10.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.207 "is_configured": false, 00:22:10.207 "data_offset": 0, 00:22:10.207 "data_size": 0 00:22:10.207 } 00:22:10.207 ] 00:22:10.207 }' 00:22:10.207 16:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:10.207 16:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.775 16:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:11.035 [2024-07-24 
16:39:07.727589] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:11.035 BaseBdev2 00:22:11.035 16:39:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:11.035 16:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:11.035 16:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:11.035 16:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:11.035 16:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:11.035 16:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:11.035 16:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:11.296 16:39:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:11.556 [ 00:22:11.556 { 00:22:11.556 "name": "BaseBdev2", 00:22:11.556 "aliases": [ 00:22:11.556 "b796f9bc-cff9-4bb2-861d-60ab1d5483d9" 00:22:11.556 ], 00:22:11.556 "product_name": "Malloc disk", 00:22:11.556 "block_size": 512, 00:22:11.556 "num_blocks": 65536, 00:22:11.556 "uuid": "b796f9bc-cff9-4bb2-861d-60ab1d5483d9", 00:22:11.556 "assigned_rate_limits": { 00:22:11.556 "rw_ios_per_sec": 0, 00:22:11.556 "rw_mbytes_per_sec": 0, 00:22:11.556 "r_mbytes_per_sec": 0, 00:22:11.556 "w_mbytes_per_sec": 0 00:22:11.556 }, 00:22:11.556 "claimed": true, 00:22:11.556 "claim_type": "exclusive_write", 00:22:11.556 "zoned": false, 00:22:11.556 "supported_io_types": { 00:22:11.556 "read": true, 00:22:11.556 "write": true, 00:22:11.556 "unmap": true, 
00:22:11.556 "flush": true, 00:22:11.556 "reset": true, 00:22:11.556 "nvme_admin": false, 00:22:11.556 "nvme_io": false, 00:22:11.556 "nvme_io_md": false, 00:22:11.556 "write_zeroes": true, 00:22:11.556 "zcopy": true, 00:22:11.556 "get_zone_info": false, 00:22:11.556 "zone_management": false, 00:22:11.556 "zone_append": false, 00:22:11.556 "compare": false, 00:22:11.556 "compare_and_write": false, 00:22:11.556 "abort": true, 00:22:11.556 "seek_hole": false, 00:22:11.556 "seek_data": false, 00:22:11.556 "copy": true, 00:22:11.556 "nvme_iov_md": false 00:22:11.556 }, 00:22:11.556 "memory_domains": [ 00:22:11.556 { 00:22:11.557 "dma_device_id": "system", 00:22:11.557 "dma_device_type": 1 00:22:11.557 }, 00:22:11.557 { 00:22:11.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.557 "dma_device_type": 2 00:22:11.557 } 00:22:11.557 ], 00:22:11.557 "driver_specific": {} 00:22:11.557 } 00:22:11.557 ] 00:22:11.557 16:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:11.557 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:11.557 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:11.557 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:11.557 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:11.557 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:11.557 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:11.557 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:11.557 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:11.557 16:39:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.557 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.557 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.557 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.557 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.557 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.816 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:11.816 "name": "Existed_Raid", 00:22:11.816 "uuid": "4c76dc11-1ebd-4859-a686-42913e5224a7", 00:22:11.816 "strip_size_kb": 64, 00:22:11.816 "state": "configuring", 00:22:11.816 "raid_level": "raid0", 00:22:11.816 "superblock": true, 00:22:11.816 "num_base_bdevs": 4, 00:22:11.816 "num_base_bdevs_discovered": 2, 00:22:11.816 "num_base_bdevs_operational": 4, 00:22:11.816 "base_bdevs_list": [ 00:22:11.816 { 00:22:11.816 "name": "BaseBdev1", 00:22:11.816 "uuid": "717c95b4-2103-4e61-8e0f-ac1c4a5225bb", 00:22:11.816 "is_configured": true, 00:22:11.816 "data_offset": 2048, 00:22:11.816 "data_size": 63488 00:22:11.816 }, 00:22:11.816 { 00:22:11.816 "name": "BaseBdev2", 00:22:11.816 "uuid": "b796f9bc-cff9-4bb2-861d-60ab1d5483d9", 00:22:11.816 "is_configured": true, 00:22:11.816 "data_offset": 2048, 00:22:11.816 "data_size": 63488 00:22:11.816 }, 00:22:11.816 { 00:22:11.816 "name": "BaseBdev3", 00:22:11.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.816 "is_configured": false, 00:22:11.816 "data_offset": 0, 00:22:11.816 "data_size": 0 00:22:11.816 }, 00:22:11.816 { 00:22:11.816 "name": "BaseBdev4", 
00:22:11.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.816 "is_configured": false, 00:22:11.816 "data_offset": 0, 00:22:11.816 "data_size": 0 00:22:11.816 } 00:22:11.816 ] 00:22:11.816 }' 00:22:11.816 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:11.816 16:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.389 16:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:12.648 [2024-07-24 16:39:09.258420] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:12.648 BaseBdev3 00:22:12.648 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:12.648 16:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:12.648 16:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:12.648 16:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:12.648 16:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:12.648 16:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:12.648 16:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:12.648 16:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:12.907 [ 00:22:12.907 { 00:22:12.907 "name": "BaseBdev3", 00:22:12.907 "aliases": [ 
00:22:12.907 "f0c18153-0dd1-442b-abdc-36e2fe647b57" 00:22:12.907 ], 00:22:12.907 "product_name": "Malloc disk", 00:22:12.907 "block_size": 512, 00:22:12.907 "num_blocks": 65536, 00:22:12.907 "uuid": "f0c18153-0dd1-442b-abdc-36e2fe647b57", 00:22:12.907 "assigned_rate_limits": { 00:22:12.907 "rw_ios_per_sec": 0, 00:22:12.907 "rw_mbytes_per_sec": 0, 00:22:12.907 "r_mbytes_per_sec": 0, 00:22:12.907 "w_mbytes_per_sec": 0 00:22:12.907 }, 00:22:12.907 "claimed": true, 00:22:12.907 "claim_type": "exclusive_write", 00:22:12.907 "zoned": false, 00:22:12.907 "supported_io_types": { 00:22:12.907 "read": true, 00:22:12.907 "write": true, 00:22:12.907 "unmap": true, 00:22:12.907 "flush": true, 00:22:12.907 "reset": true, 00:22:12.907 "nvme_admin": false, 00:22:12.907 "nvme_io": false, 00:22:12.907 "nvme_io_md": false, 00:22:12.907 "write_zeroes": true, 00:22:12.907 "zcopy": true, 00:22:12.907 "get_zone_info": false, 00:22:12.907 "zone_management": false, 00:22:12.907 "zone_append": false, 00:22:12.907 "compare": false, 00:22:12.907 "compare_and_write": false, 00:22:12.907 "abort": true, 00:22:12.907 "seek_hole": false, 00:22:12.907 "seek_data": false, 00:22:12.907 "copy": true, 00:22:12.907 "nvme_iov_md": false 00:22:12.907 }, 00:22:12.907 "memory_domains": [ 00:22:12.907 { 00:22:12.907 "dma_device_id": "system", 00:22:12.907 "dma_device_type": 1 00:22:12.907 }, 00:22:12.907 { 00:22:12.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.907 "dma_device_type": 2 00:22:12.907 } 00:22:12.907 ], 00:22:12.907 "driver_specific": {} 00:22:12.907 } 00:22:12.907 ] 00:22:12.907 16:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:12.907 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:12.907 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:12.907 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:12.907 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:12.907 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:12.907 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:12.907 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:12.907 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:12.907 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:12.907 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:12.907 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:12.907 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:12.907 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.908 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.167 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.167 "name": "Existed_Raid", 00:22:13.167 "uuid": "4c76dc11-1ebd-4859-a686-42913e5224a7", 00:22:13.167 "strip_size_kb": 64, 00:22:13.167 "state": "configuring", 00:22:13.167 "raid_level": "raid0", 00:22:13.167 "superblock": true, 00:22:13.167 "num_base_bdevs": 4, 00:22:13.167 "num_base_bdevs_discovered": 3, 00:22:13.167 "num_base_bdevs_operational": 4, 00:22:13.167 "base_bdevs_list": [ 00:22:13.167 { 00:22:13.167 "name": "BaseBdev1", 
00:22:13.167 "uuid": "717c95b4-2103-4e61-8e0f-ac1c4a5225bb", 00:22:13.167 "is_configured": true, 00:22:13.167 "data_offset": 2048, 00:22:13.167 "data_size": 63488 00:22:13.167 }, 00:22:13.167 { 00:22:13.167 "name": "BaseBdev2", 00:22:13.167 "uuid": "b796f9bc-cff9-4bb2-861d-60ab1d5483d9", 00:22:13.167 "is_configured": true, 00:22:13.167 "data_offset": 2048, 00:22:13.167 "data_size": 63488 00:22:13.167 }, 00:22:13.167 { 00:22:13.167 "name": "BaseBdev3", 00:22:13.167 "uuid": "f0c18153-0dd1-442b-abdc-36e2fe647b57", 00:22:13.167 "is_configured": true, 00:22:13.167 "data_offset": 2048, 00:22:13.167 "data_size": 63488 00:22:13.167 }, 00:22:13.167 { 00:22:13.167 "name": "BaseBdev4", 00:22:13.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.167 "is_configured": false, 00:22:13.167 "data_offset": 0, 00:22:13.167 "data_size": 0 00:22:13.167 } 00:22:13.167 ] 00:22:13.167 }' 00:22:13.167 16:39:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.167 16:39:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.735 16:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:13.994 [2024-07-24 16:39:10.784524] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:13.994 [2024-07-24 16:39:10.784801] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:22:13.994 [2024-07-24 16:39:10.784821] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:13.994 [2024-07-24 16:39:10.785156] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:22:13.994 [2024-07-24 16:39:10.785395] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:22:13.994 [2024-07-24 16:39:10.785414] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:22:13.994 [2024-07-24 16:39:10.785601] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.994 BaseBdev4 00:22:13.994 16:39:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:13.994 16:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:13.994 16:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:13.994 16:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:13.994 16:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:13.994 16:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:13.994 16:39:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:14.253 16:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:14.513 [ 00:22:14.513 { 00:22:14.513 "name": "BaseBdev4", 00:22:14.513 "aliases": [ 00:22:14.513 "7ae94327-796d-411d-b5b7-8b6dd8171a4e" 00:22:14.513 ], 00:22:14.513 "product_name": "Malloc disk", 00:22:14.513 "block_size": 512, 00:22:14.513 "num_blocks": 65536, 00:22:14.513 "uuid": "7ae94327-796d-411d-b5b7-8b6dd8171a4e", 00:22:14.513 "assigned_rate_limits": { 00:22:14.513 "rw_ios_per_sec": 0, 00:22:14.513 "rw_mbytes_per_sec": 0, 00:22:14.513 "r_mbytes_per_sec": 0, 00:22:14.513 "w_mbytes_per_sec": 0 00:22:14.513 }, 00:22:14.513 "claimed": true, 00:22:14.513 "claim_type": "exclusive_write", 00:22:14.513 "zoned": 
false, 00:22:14.513 "supported_io_types": { 00:22:14.513 "read": true, 00:22:14.513 "write": true, 00:22:14.513 "unmap": true, 00:22:14.513 "flush": true, 00:22:14.513 "reset": true, 00:22:14.513 "nvme_admin": false, 00:22:14.513 "nvme_io": false, 00:22:14.513 "nvme_io_md": false, 00:22:14.513 "write_zeroes": true, 00:22:14.513 "zcopy": true, 00:22:14.513 "get_zone_info": false, 00:22:14.513 "zone_management": false, 00:22:14.513 "zone_append": false, 00:22:14.513 "compare": false, 00:22:14.513 "compare_and_write": false, 00:22:14.513 "abort": true, 00:22:14.513 "seek_hole": false, 00:22:14.513 "seek_data": false, 00:22:14.513 "copy": true, 00:22:14.513 "nvme_iov_md": false 00:22:14.513 }, 00:22:14.513 "memory_domains": [ 00:22:14.513 { 00:22:14.513 "dma_device_id": "system", 00:22:14.513 "dma_device_type": 1 00:22:14.513 }, 00:22:14.513 { 00:22:14.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.513 "dma_device_type": 2 00:22:14.513 } 00:22:14.513 ], 00:22:14.513 "driver_specific": {} 00:22:14.513 } 00:22:14.513 ] 00:22:14.513 16:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:14.513 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:14.513 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:14.513 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:14.513 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:14.513 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:14.513 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:14.513 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:14.513 16:39:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:14.513 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.513 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:14.513 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:14.513 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.513 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.513 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.772 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.772 "name": "Existed_Raid", 00:22:14.772 "uuid": "4c76dc11-1ebd-4859-a686-42913e5224a7", 00:22:14.772 "strip_size_kb": 64, 00:22:14.772 "state": "online", 00:22:14.772 "raid_level": "raid0", 00:22:14.772 "superblock": true, 00:22:14.772 "num_base_bdevs": 4, 00:22:14.772 "num_base_bdevs_discovered": 4, 00:22:14.772 "num_base_bdevs_operational": 4, 00:22:14.772 "base_bdevs_list": [ 00:22:14.772 { 00:22:14.772 "name": "BaseBdev1", 00:22:14.772 "uuid": "717c95b4-2103-4e61-8e0f-ac1c4a5225bb", 00:22:14.772 "is_configured": true, 00:22:14.772 "data_offset": 2048, 00:22:14.772 "data_size": 63488 00:22:14.772 }, 00:22:14.772 { 00:22:14.772 "name": "BaseBdev2", 00:22:14.772 "uuid": "b796f9bc-cff9-4bb2-861d-60ab1d5483d9", 00:22:14.772 "is_configured": true, 00:22:14.772 "data_offset": 2048, 00:22:14.772 "data_size": 63488 00:22:14.772 }, 00:22:14.772 { 00:22:14.772 "name": "BaseBdev3", 00:22:14.772 "uuid": "f0c18153-0dd1-442b-abdc-36e2fe647b57", 00:22:14.772 "is_configured": true, 
00:22:14.772 "data_offset": 2048, 00:22:14.772 "data_size": 63488 00:22:14.772 }, 00:22:14.772 { 00:22:14.772 "name": "BaseBdev4", 00:22:14.772 "uuid": "7ae94327-796d-411d-b5b7-8b6dd8171a4e", 00:22:14.772 "is_configured": true, 00:22:14.772 "data_offset": 2048, 00:22:14.772 "data_size": 63488 00:22:14.772 } 00:22:14.772 ] 00:22:14.772 }' 00:22:14.772 16:39:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.772 16:39:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.339 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:15.339 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:15.339 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:15.339 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:15.339 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:15.339 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:15.339 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:15.339 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:15.598 [2024-07-24 16:39:12.277033] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.598 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:15.598 "name": "Existed_Raid", 00:22:15.598 "aliases": [ 00:22:15.598 "4c76dc11-1ebd-4859-a686-42913e5224a7" 00:22:15.598 ], 00:22:15.598 "product_name": "Raid Volume", 00:22:15.598 "block_size": 512, 00:22:15.598 
"num_blocks": 253952, 00:22:15.598 "uuid": "4c76dc11-1ebd-4859-a686-42913e5224a7", 00:22:15.598 "assigned_rate_limits": { 00:22:15.598 "rw_ios_per_sec": 0, 00:22:15.598 "rw_mbytes_per_sec": 0, 00:22:15.598 "r_mbytes_per_sec": 0, 00:22:15.598 "w_mbytes_per_sec": 0 00:22:15.598 }, 00:22:15.598 "claimed": false, 00:22:15.598 "zoned": false, 00:22:15.598 "supported_io_types": { 00:22:15.598 "read": true, 00:22:15.598 "write": true, 00:22:15.598 "unmap": true, 00:22:15.598 "flush": true, 00:22:15.598 "reset": true, 00:22:15.598 "nvme_admin": false, 00:22:15.598 "nvme_io": false, 00:22:15.598 "nvme_io_md": false, 00:22:15.598 "write_zeroes": true, 00:22:15.598 "zcopy": false, 00:22:15.598 "get_zone_info": false, 00:22:15.598 "zone_management": false, 00:22:15.598 "zone_append": false, 00:22:15.598 "compare": false, 00:22:15.598 "compare_and_write": false, 00:22:15.598 "abort": false, 00:22:15.598 "seek_hole": false, 00:22:15.598 "seek_data": false, 00:22:15.598 "copy": false, 00:22:15.598 "nvme_iov_md": false 00:22:15.598 }, 00:22:15.598 "memory_domains": [ 00:22:15.598 { 00:22:15.598 "dma_device_id": "system", 00:22:15.598 "dma_device_type": 1 00:22:15.598 }, 00:22:15.598 { 00:22:15.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.598 "dma_device_type": 2 00:22:15.598 }, 00:22:15.598 { 00:22:15.598 "dma_device_id": "system", 00:22:15.598 "dma_device_type": 1 00:22:15.598 }, 00:22:15.598 { 00:22:15.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.598 "dma_device_type": 2 00:22:15.598 }, 00:22:15.598 { 00:22:15.598 "dma_device_id": "system", 00:22:15.598 "dma_device_type": 1 00:22:15.598 }, 00:22:15.598 { 00:22:15.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.598 "dma_device_type": 2 00:22:15.598 }, 00:22:15.598 { 00:22:15.598 "dma_device_id": "system", 00:22:15.598 "dma_device_type": 1 00:22:15.598 }, 00:22:15.598 { 00:22:15.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.598 "dma_device_type": 2 00:22:15.598 } 00:22:15.598 ], 00:22:15.598 
"driver_specific": { 00:22:15.598 "raid": { 00:22:15.598 "uuid": "4c76dc11-1ebd-4859-a686-42913e5224a7", 00:22:15.598 "strip_size_kb": 64, 00:22:15.598 "state": "online", 00:22:15.598 "raid_level": "raid0", 00:22:15.598 "superblock": true, 00:22:15.598 "num_base_bdevs": 4, 00:22:15.598 "num_base_bdevs_discovered": 4, 00:22:15.598 "num_base_bdevs_operational": 4, 00:22:15.598 "base_bdevs_list": [ 00:22:15.598 { 00:22:15.598 "name": "BaseBdev1", 00:22:15.598 "uuid": "717c95b4-2103-4e61-8e0f-ac1c4a5225bb", 00:22:15.598 "is_configured": true, 00:22:15.598 "data_offset": 2048, 00:22:15.598 "data_size": 63488 00:22:15.598 }, 00:22:15.598 { 00:22:15.598 "name": "BaseBdev2", 00:22:15.598 "uuid": "b796f9bc-cff9-4bb2-861d-60ab1d5483d9", 00:22:15.598 "is_configured": true, 00:22:15.598 "data_offset": 2048, 00:22:15.599 "data_size": 63488 00:22:15.599 }, 00:22:15.599 { 00:22:15.599 "name": "BaseBdev3", 00:22:15.599 "uuid": "f0c18153-0dd1-442b-abdc-36e2fe647b57", 00:22:15.599 "is_configured": true, 00:22:15.599 "data_offset": 2048, 00:22:15.599 "data_size": 63488 00:22:15.599 }, 00:22:15.599 { 00:22:15.599 "name": "BaseBdev4", 00:22:15.599 "uuid": "7ae94327-796d-411d-b5b7-8b6dd8171a4e", 00:22:15.599 "is_configured": true, 00:22:15.599 "data_offset": 2048, 00:22:15.599 "data_size": 63488 00:22:15.599 } 00:22:15.599 ] 00:22:15.599 } 00:22:15.599 } 00:22:15.599 }' 00:22:15.599 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:15.599 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:15.599 BaseBdev2 00:22:15.599 BaseBdev3 00:22:15.599 BaseBdev4' 00:22:15.599 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:15.599 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:15.599 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:15.857 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:15.857 "name": "BaseBdev1", 00:22:15.857 "aliases": [ 00:22:15.857 "717c95b4-2103-4e61-8e0f-ac1c4a5225bb" 00:22:15.857 ], 00:22:15.857 "product_name": "Malloc disk", 00:22:15.857 "block_size": 512, 00:22:15.857 "num_blocks": 65536, 00:22:15.857 "uuid": "717c95b4-2103-4e61-8e0f-ac1c4a5225bb", 00:22:15.857 "assigned_rate_limits": { 00:22:15.857 "rw_ios_per_sec": 0, 00:22:15.857 "rw_mbytes_per_sec": 0, 00:22:15.857 "r_mbytes_per_sec": 0, 00:22:15.857 "w_mbytes_per_sec": 0 00:22:15.857 }, 00:22:15.857 "claimed": true, 00:22:15.857 "claim_type": "exclusive_write", 00:22:15.857 "zoned": false, 00:22:15.857 "supported_io_types": { 00:22:15.857 "read": true, 00:22:15.857 "write": true, 00:22:15.857 "unmap": true, 00:22:15.857 "flush": true, 00:22:15.857 "reset": true, 00:22:15.857 "nvme_admin": false, 00:22:15.857 "nvme_io": false, 00:22:15.857 "nvme_io_md": false, 00:22:15.857 "write_zeroes": true, 00:22:15.857 "zcopy": true, 00:22:15.857 "get_zone_info": false, 00:22:15.857 "zone_management": false, 00:22:15.857 "zone_append": false, 00:22:15.857 "compare": false, 00:22:15.857 "compare_and_write": false, 00:22:15.857 "abort": true, 00:22:15.857 "seek_hole": false, 00:22:15.857 "seek_data": false, 00:22:15.857 "copy": true, 00:22:15.857 "nvme_iov_md": false 00:22:15.857 }, 00:22:15.857 "memory_domains": [ 00:22:15.857 { 00:22:15.857 "dma_device_id": "system", 00:22:15.857 "dma_device_type": 1 00:22:15.857 }, 00:22:15.857 { 00:22:15.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.857 "dma_device_type": 2 00:22:15.857 } 00:22:15.857 ], 00:22:15.857 "driver_specific": {} 00:22:15.857 }' 00:22:15.857 16:39:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.857 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.857 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:15.857 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.857 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.116 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:16.116 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.116 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.116 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:16.116 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.116 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.116 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:16.116 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:16.116 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:16.116 16:39:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:16.374 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:16.374 "name": "BaseBdev2", 00:22:16.374 "aliases": [ 00:22:16.374 "b796f9bc-cff9-4bb2-861d-60ab1d5483d9" 00:22:16.374 ], 00:22:16.374 "product_name": "Malloc disk", 00:22:16.374 "block_size": 512, 00:22:16.374 
"num_blocks": 65536, 00:22:16.374 "uuid": "b796f9bc-cff9-4bb2-861d-60ab1d5483d9", 00:22:16.374 "assigned_rate_limits": { 00:22:16.374 "rw_ios_per_sec": 0, 00:22:16.374 "rw_mbytes_per_sec": 0, 00:22:16.374 "r_mbytes_per_sec": 0, 00:22:16.374 "w_mbytes_per_sec": 0 00:22:16.374 }, 00:22:16.374 "claimed": true, 00:22:16.374 "claim_type": "exclusive_write", 00:22:16.374 "zoned": false, 00:22:16.374 "supported_io_types": { 00:22:16.374 "read": true, 00:22:16.374 "write": true, 00:22:16.374 "unmap": true, 00:22:16.374 "flush": true, 00:22:16.374 "reset": true, 00:22:16.374 "nvme_admin": false, 00:22:16.374 "nvme_io": false, 00:22:16.374 "nvme_io_md": false, 00:22:16.375 "write_zeroes": true, 00:22:16.375 "zcopy": true, 00:22:16.375 "get_zone_info": false, 00:22:16.375 "zone_management": false, 00:22:16.375 "zone_append": false, 00:22:16.375 "compare": false, 00:22:16.375 "compare_and_write": false, 00:22:16.375 "abort": true, 00:22:16.375 "seek_hole": false, 00:22:16.375 "seek_data": false, 00:22:16.375 "copy": true, 00:22:16.375 "nvme_iov_md": false 00:22:16.375 }, 00:22:16.375 "memory_domains": [ 00:22:16.375 { 00:22:16.375 "dma_device_id": "system", 00:22:16.375 "dma_device_type": 1 00:22:16.375 }, 00:22:16.375 { 00:22:16.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.375 "dma_device_type": 2 00:22:16.375 } 00:22:16.375 ], 00:22:16.375 "driver_specific": {} 00:22:16.375 }' 00:22:16.375 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:16.375 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:16.375 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:16.375 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.375 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.633 16:39:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:16.633 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.633 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.633 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:16.633 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.633 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.633 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:16.633 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:16.633 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:16.633 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:16.891 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:16.891 "name": "BaseBdev3", 00:22:16.891 "aliases": [ 00:22:16.891 "f0c18153-0dd1-442b-abdc-36e2fe647b57" 00:22:16.891 ], 00:22:16.891 "product_name": "Malloc disk", 00:22:16.891 "block_size": 512, 00:22:16.891 "num_blocks": 65536, 00:22:16.891 "uuid": "f0c18153-0dd1-442b-abdc-36e2fe647b57", 00:22:16.891 "assigned_rate_limits": { 00:22:16.891 "rw_ios_per_sec": 0, 00:22:16.891 "rw_mbytes_per_sec": 0, 00:22:16.891 "r_mbytes_per_sec": 0, 00:22:16.891 "w_mbytes_per_sec": 0 00:22:16.891 }, 00:22:16.891 "claimed": true, 00:22:16.891 "claim_type": "exclusive_write", 00:22:16.891 "zoned": false, 00:22:16.891 "supported_io_types": { 00:22:16.891 "read": true, 00:22:16.891 "write": true, 00:22:16.891 "unmap": true, 00:22:16.891 "flush": true, 00:22:16.891 "reset": true, 
00:22:16.891 "nvme_admin": false, 00:22:16.891 "nvme_io": false, 00:22:16.891 "nvme_io_md": false, 00:22:16.891 "write_zeroes": true, 00:22:16.891 "zcopy": true, 00:22:16.891 "get_zone_info": false, 00:22:16.891 "zone_management": false, 00:22:16.891 "zone_append": false, 00:22:16.891 "compare": false, 00:22:16.891 "compare_and_write": false, 00:22:16.891 "abort": true, 00:22:16.891 "seek_hole": false, 00:22:16.891 "seek_data": false, 00:22:16.891 "copy": true, 00:22:16.891 "nvme_iov_md": false 00:22:16.891 }, 00:22:16.891 "memory_domains": [ 00:22:16.891 { 00:22:16.891 "dma_device_id": "system", 00:22:16.891 "dma_device_type": 1 00:22:16.891 }, 00:22:16.891 { 00:22:16.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.891 "dma_device_type": 2 00:22:16.891 } 00:22:16.891 ], 00:22:16.891 "driver_specific": {} 00:22:16.891 }' 00:22:16.891 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:16.891 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:16.891 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:16.891 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:17.149 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:17.149 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:17.149 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:17.149 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:17.149 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:17.149 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:17.149 16:39:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:22:17.407 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:17.407 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:17.407 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:17.407 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:17.407 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:17.407 "name": "BaseBdev4", 00:22:17.407 "aliases": [ 00:22:17.407 "7ae94327-796d-411d-b5b7-8b6dd8171a4e" 00:22:17.407 ], 00:22:17.407 "product_name": "Malloc disk", 00:22:17.407 "block_size": 512, 00:22:17.407 "num_blocks": 65536, 00:22:17.407 "uuid": "7ae94327-796d-411d-b5b7-8b6dd8171a4e", 00:22:17.407 "assigned_rate_limits": { 00:22:17.407 "rw_ios_per_sec": 0, 00:22:17.407 "rw_mbytes_per_sec": 0, 00:22:17.407 "r_mbytes_per_sec": 0, 00:22:17.407 "w_mbytes_per_sec": 0 00:22:17.407 }, 00:22:17.407 "claimed": true, 00:22:17.407 "claim_type": "exclusive_write", 00:22:17.407 "zoned": false, 00:22:17.407 "supported_io_types": { 00:22:17.407 "read": true, 00:22:17.407 "write": true, 00:22:17.407 "unmap": true, 00:22:17.407 "flush": true, 00:22:17.407 "reset": true, 00:22:17.407 "nvme_admin": false, 00:22:17.407 "nvme_io": false, 00:22:17.407 "nvme_io_md": false, 00:22:17.407 "write_zeroes": true, 00:22:17.407 "zcopy": true, 00:22:17.407 "get_zone_info": false, 00:22:17.407 "zone_management": false, 00:22:17.407 "zone_append": false, 00:22:17.407 "compare": false, 00:22:17.407 "compare_and_write": false, 00:22:17.407 "abort": true, 00:22:17.407 "seek_hole": false, 00:22:17.407 "seek_data": false, 00:22:17.407 "copy": true, 00:22:17.407 "nvme_iov_md": false 00:22:17.407 }, 00:22:17.407 "memory_domains": [ 00:22:17.407 { 
00:22:17.407 "dma_device_id": "system", 00:22:17.407 "dma_device_type": 1 00:22:17.407 }, 00:22:17.407 { 00:22:17.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.407 "dma_device_type": 2 00:22:17.407 } 00:22:17.407 ], 00:22:17.407 "driver_specific": {} 00:22:17.407 }' 00:22:17.407 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:17.665 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:17.665 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:17.665 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:17.665 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:17.665 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:17.665 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:17.665 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:17.665 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:17.665 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:17.923 [2024-07-24 16:39:14.711300] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:17.923 [2024-07-24 16:39:14.711336] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:22:17.923 [2024-07-24 16:39:14.711395] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.923 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:18.181 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:18.181 "name": "Existed_Raid", 00:22:18.181 "uuid": "4c76dc11-1ebd-4859-a686-42913e5224a7", 00:22:18.181 "strip_size_kb": 64, 00:22:18.181 "state": "offline", 00:22:18.181 "raid_level": "raid0", 00:22:18.181 "superblock": true, 00:22:18.181 "num_base_bdevs": 4, 00:22:18.181 "num_base_bdevs_discovered": 3, 00:22:18.181 "num_base_bdevs_operational": 3, 00:22:18.181 "base_bdevs_list": [ 00:22:18.181 { 00:22:18.181 "name": null, 00:22:18.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.181 "is_configured": false, 00:22:18.181 "data_offset": 2048, 00:22:18.181 "data_size": 63488 00:22:18.181 }, 00:22:18.181 { 00:22:18.181 "name": "BaseBdev2", 00:22:18.181 "uuid": "b796f9bc-cff9-4bb2-861d-60ab1d5483d9", 00:22:18.181 "is_configured": true, 00:22:18.181 "data_offset": 2048, 00:22:18.181 "data_size": 63488 00:22:18.181 }, 00:22:18.181 { 00:22:18.181 "name": "BaseBdev3", 00:22:18.181 "uuid": "f0c18153-0dd1-442b-abdc-36e2fe647b57", 00:22:18.181 "is_configured": true, 00:22:18.181 "data_offset": 2048, 00:22:18.181 "data_size": 63488 00:22:18.181 }, 00:22:18.181 { 00:22:18.181 "name": "BaseBdev4", 00:22:18.181 "uuid": "7ae94327-796d-411d-b5b7-8b6dd8171a4e", 00:22:18.181 "is_configured": true, 00:22:18.181 "data_offset": 2048, 00:22:18.181 "data_size": 63488 00:22:18.181 } 00:22:18.181 ] 00:22:18.181 }' 00:22:18.181 16:39:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:18.181 16:39:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.746 16:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:18.746 16:39:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:18.746 16:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.746 16:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:19.004 16:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:19.004 16:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:19.004 16:39:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:19.263 [2024-07-24 16:39:16.008028] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:19.521 16:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:19.521 16:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:19.521 16:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:19.521 16:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.779 16:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:19.779 16:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:19.779 16:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:19.779 [2024-07-24 16:39:16.589492] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev3 00:22:20.037 16:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:20.037 16:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:20.037 16:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:20.037 16:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.295 16:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:20.295 16:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:20.295 16:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:20.553 [2024-07-24 16:39:17.177225] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:20.553 [2024-07-24 16:39:17.177283] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:22:20.553 16:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:20.553 16:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:20.553 16:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.553 16:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:20.812 16:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:20.812 16:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' 
-n '' ']' 00:22:20.812 16:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:20.812 16:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:20.812 16:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:20.812 16:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:21.070 BaseBdev2 00:22:21.070 16:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:21.070 16:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:21.070 16:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:21.070 16:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:21.070 16:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:21.070 16:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:21.070 16:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:21.329 16:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:21.587 [ 00:22:21.587 { 00:22:21.587 "name": "BaseBdev2", 00:22:21.587 "aliases": [ 00:22:21.587 "5f937e5e-3a75-457a-a1ab-a2813a91d009" 00:22:21.587 ], 00:22:21.587 "product_name": "Malloc disk", 00:22:21.587 "block_size": 512, 00:22:21.587 "num_blocks": 65536, 00:22:21.587 "uuid": 
"5f937e5e-3a75-457a-a1ab-a2813a91d009", 00:22:21.587 "assigned_rate_limits": { 00:22:21.587 "rw_ios_per_sec": 0, 00:22:21.587 "rw_mbytes_per_sec": 0, 00:22:21.587 "r_mbytes_per_sec": 0, 00:22:21.587 "w_mbytes_per_sec": 0 00:22:21.587 }, 00:22:21.587 "claimed": false, 00:22:21.587 "zoned": false, 00:22:21.587 "supported_io_types": { 00:22:21.587 "read": true, 00:22:21.587 "write": true, 00:22:21.588 "unmap": true, 00:22:21.588 "flush": true, 00:22:21.588 "reset": true, 00:22:21.588 "nvme_admin": false, 00:22:21.588 "nvme_io": false, 00:22:21.588 "nvme_io_md": false, 00:22:21.588 "write_zeroes": true, 00:22:21.588 "zcopy": true, 00:22:21.588 "get_zone_info": false, 00:22:21.588 "zone_management": false, 00:22:21.588 "zone_append": false, 00:22:21.588 "compare": false, 00:22:21.588 "compare_and_write": false, 00:22:21.588 "abort": true, 00:22:21.588 "seek_hole": false, 00:22:21.588 "seek_data": false, 00:22:21.588 "copy": true, 00:22:21.588 "nvme_iov_md": false 00:22:21.588 }, 00:22:21.588 "memory_domains": [ 00:22:21.588 { 00:22:21.588 "dma_device_id": "system", 00:22:21.588 "dma_device_type": 1 00:22:21.588 }, 00:22:21.588 { 00:22:21.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.588 "dma_device_type": 2 00:22:21.588 } 00:22:21.588 ], 00:22:21.588 "driver_specific": {} 00:22:21.588 } 00:22:21.588 ] 00:22:21.588 16:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:21.588 16:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:21.588 16:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:21.588 16:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:21.846 BaseBdev3 00:22:21.846 16:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev 
BaseBdev3 00:22:21.846 16:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:21.846 16:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:21.846 16:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:21.846 16:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:21.846 16:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:21.846 16:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:22.105 16:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:22.363 [ 00:22:22.363 { 00:22:22.363 "name": "BaseBdev3", 00:22:22.363 "aliases": [ 00:22:22.363 "8f01826d-193a-40ce-9ab6-aa29bb0b0a0c" 00:22:22.363 ], 00:22:22.363 "product_name": "Malloc disk", 00:22:22.363 "block_size": 512, 00:22:22.363 "num_blocks": 65536, 00:22:22.363 "uuid": "8f01826d-193a-40ce-9ab6-aa29bb0b0a0c", 00:22:22.363 "assigned_rate_limits": { 00:22:22.363 "rw_ios_per_sec": 0, 00:22:22.363 "rw_mbytes_per_sec": 0, 00:22:22.363 "r_mbytes_per_sec": 0, 00:22:22.363 "w_mbytes_per_sec": 0 00:22:22.363 }, 00:22:22.363 "claimed": false, 00:22:22.363 "zoned": false, 00:22:22.363 "supported_io_types": { 00:22:22.363 "read": true, 00:22:22.363 "write": true, 00:22:22.363 "unmap": true, 00:22:22.363 "flush": true, 00:22:22.363 "reset": true, 00:22:22.363 "nvme_admin": false, 00:22:22.363 "nvme_io": false, 00:22:22.363 "nvme_io_md": false, 00:22:22.363 "write_zeroes": true, 00:22:22.363 "zcopy": true, 00:22:22.363 "get_zone_info": false, 00:22:22.363 
"zone_management": false, 00:22:22.363 "zone_append": false, 00:22:22.405 "compare": false, 00:22:22.405 "compare_and_write": false, 00:22:22.405 "abort": true, 00:22:22.405 "seek_hole": false, 00:22:22.405 "seek_data": false, 00:22:22.405 "copy": true, 00:22:22.405 "nvme_iov_md": false 00:22:22.405 }, 00:22:22.405 "memory_domains": [ 00:22:22.405 { 00:22:22.405 "dma_device_id": "system", 00:22:22.405 "dma_device_type": 1 00:22:22.405 }, 00:22:22.405 { 00:22:22.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.405 "dma_device_type": 2 00:22:22.405 } 00:22:22.405 ], 00:22:22.405 "driver_specific": {} 00:22:22.405 } 00:22:22.405 ] 00:22:22.405 16:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:22.405 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:22.405 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:22.405 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:22.664 BaseBdev4 00:22:22.664 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:22.664 16:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:22.664 16:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:22.664 16:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:22.664 16:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:22.664 16:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:22.664 16:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:22.664 16:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:22.922 [ 00:22:22.922 { 00:22:22.922 "name": "BaseBdev4", 00:22:22.922 "aliases": [ 00:22:22.922 "cbb6d82b-01e2-48b5-80b8-0970b5a586e6" 00:22:22.922 ], 00:22:22.922 "product_name": "Malloc disk", 00:22:22.922 "block_size": 512, 00:22:22.922 "num_blocks": 65536, 00:22:22.923 "uuid": "cbb6d82b-01e2-48b5-80b8-0970b5a586e6", 00:22:22.923 "assigned_rate_limits": { 00:22:22.923 "rw_ios_per_sec": 0, 00:22:22.923 "rw_mbytes_per_sec": 0, 00:22:22.923 "r_mbytes_per_sec": 0, 00:22:22.923 "w_mbytes_per_sec": 0 00:22:22.923 }, 00:22:22.923 "claimed": false, 00:22:22.923 "zoned": false, 00:22:22.923 "supported_io_types": { 00:22:22.923 "read": true, 00:22:22.923 "write": true, 00:22:22.923 "unmap": true, 00:22:22.923 "flush": true, 00:22:22.923 "reset": true, 00:22:22.923 "nvme_admin": false, 00:22:22.923 "nvme_io": false, 00:22:22.923 "nvme_io_md": false, 00:22:22.923 "write_zeroes": true, 00:22:22.923 "zcopy": true, 00:22:22.923 "get_zone_info": false, 00:22:22.923 "zone_management": false, 00:22:22.923 "zone_append": false, 00:22:22.923 "compare": false, 00:22:22.923 "compare_and_write": false, 00:22:22.923 "abort": true, 00:22:22.923 "seek_hole": false, 00:22:22.923 "seek_data": false, 00:22:22.923 "copy": true, 00:22:22.923 "nvme_iov_md": false 00:22:22.923 }, 00:22:22.923 "memory_domains": [ 00:22:22.923 { 00:22:22.923 "dma_device_id": "system", 00:22:22.923 "dma_device_type": 1 00:22:22.923 }, 00:22:22.923 { 00:22:22.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:22.923 "dma_device_type": 2 00:22:22.923 } 00:22:22.923 ], 00:22:22.923 "driver_specific": {} 00:22:22.923 } 00:22:22.923 ] 00:22:22.923 16:39:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:22.923 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:22.923 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:22.923 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:23.181 [2024-07-24 16:39:19.934402] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:23.181 [2024-07-24 16:39:19.934447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:23.181 [2024-07-24 16:39:19.934477] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:23.181 [2024-07-24 16:39:19.936776] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:23.181 [2024-07-24 16:39:19.936841] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:23.181 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:23.181 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:23.181 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:23.181 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:23.181 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:23.181 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:23.181 16:39:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:23.181 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:23.182 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:23.182 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:23.182 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.182 16:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.440 16:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:23.440 "name": "Existed_Raid", 00:22:23.440 "uuid": "e97a012a-b79f-4fc8-8b3e-027cabb03299", 00:22:23.440 "strip_size_kb": 64, 00:22:23.440 "state": "configuring", 00:22:23.440 "raid_level": "raid0", 00:22:23.440 "superblock": true, 00:22:23.440 "num_base_bdevs": 4, 00:22:23.440 "num_base_bdevs_discovered": 3, 00:22:23.440 "num_base_bdevs_operational": 4, 00:22:23.440 "base_bdevs_list": [ 00:22:23.440 { 00:22:23.440 "name": "BaseBdev1", 00:22:23.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.440 "is_configured": false, 00:22:23.440 "data_offset": 0, 00:22:23.440 "data_size": 0 00:22:23.440 }, 00:22:23.440 { 00:22:23.440 "name": "BaseBdev2", 00:22:23.440 "uuid": "5f937e5e-3a75-457a-a1ab-a2813a91d009", 00:22:23.440 "is_configured": true, 00:22:23.440 "data_offset": 2048, 00:22:23.440 "data_size": 63488 00:22:23.440 }, 00:22:23.440 { 00:22:23.440 "name": "BaseBdev3", 00:22:23.440 "uuid": "8f01826d-193a-40ce-9ab6-aa29bb0b0a0c", 00:22:23.440 "is_configured": true, 00:22:23.440 "data_offset": 2048, 00:22:23.440 "data_size": 63488 00:22:23.440 }, 00:22:23.440 { 00:22:23.440 "name": "BaseBdev4", 
00:22:23.440 "uuid": "cbb6d82b-01e2-48b5-80b8-0970b5a586e6", 00:22:23.440 "is_configured": true, 00:22:23.440 "data_offset": 2048, 00:22:23.440 "data_size": 63488 00:22:23.440 } 00:22:23.440 ] 00:22:23.440 }' 00:22:23.440 16:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:23.440 16:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:24.034 16:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:24.292 [2024-07-24 16:39:20.921012] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:24.292 16:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:24.292 16:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:24.293 16:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:24.293 16:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:24.293 16:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:24.293 16:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:24.293 16:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:24.293 16:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:24.293 16:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:24.293 16:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:24.293 16:39:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.293 16:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:24.551 16:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:24.551 "name": "Existed_Raid", 00:22:24.551 "uuid": "e97a012a-b79f-4fc8-8b3e-027cabb03299", 00:22:24.551 "strip_size_kb": 64, 00:22:24.551 "state": "configuring", 00:22:24.551 "raid_level": "raid0", 00:22:24.551 "superblock": true, 00:22:24.551 "num_base_bdevs": 4, 00:22:24.551 "num_base_bdevs_discovered": 2, 00:22:24.551 "num_base_bdevs_operational": 4, 00:22:24.551 "base_bdevs_list": [ 00:22:24.551 { 00:22:24.551 "name": "BaseBdev1", 00:22:24.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.551 "is_configured": false, 00:22:24.551 "data_offset": 0, 00:22:24.551 "data_size": 0 00:22:24.551 }, 00:22:24.551 { 00:22:24.551 "name": null, 00:22:24.551 "uuid": "5f937e5e-3a75-457a-a1ab-a2813a91d009", 00:22:24.551 "is_configured": false, 00:22:24.551 "data_offset": 2048, 00:22:24.551 "data_size": 63488 00:22:24.551 }, 00:22:24.551 { 00:22:24.551 "name": "BaseBdev3", 00:22:24.551 "uuid": "8f01826d-193a-40ce-9ab6-aa29bb0b0a0c", 00:22:24.551 "is_configured": true, 00:22:24.551 "data_offset": 2048, 00:22:24.551 "data_size": 63488 00:22:24.551 }, 00:22:24.551 { 00:22:24.551 "name": "BaseBdev4", 00:22:24.551 "uuid": "cbb6d82b-01e2-48b5-80b8-0970b5a586e6", 00:22:24.551 "is_configured": true, 00:22:24.551 "data_offset": 2048, 00:22:24.551 "data_size": 63488 00:22:24.551 } 00:22:24.551 ] 00:22:24.551 }' 00:22:24.551 16:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:24.551 16:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.117 16:39:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.117 16:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:25.117 16:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:25.117 16:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:25.374 [2024-07-24 16:39:22.175053] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:25.374 BaseBdev1 00:22:25.374 16:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:25.374 16:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:25.374 16:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:25.374 16:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:25.374 16:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:25.374 16:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:25.374 16:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:25.631 16:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:25.889 [ 00:22:25.889 { 00:22:25.889 "name": "BaseBdev1", 00:22:25.889 "aliases": [ 00:22:25.889 
"797583b8-2500-4464-a75b-d43b90cfb59e" 00:22:25.889 ], 00:22:25.889 "product_name": "Malloc disk", 00:22:25.889 "block_size": 512, 00:22:25.889 "num_blocks": 65536, 00:22:25.889 "uuid": "797583b8-2500-4464-a75b-d43b90cfb59e", 00:22:25.889 "assigned_rate_limits": { 00:22:25.889 "rw_ios_per_sec": 0, 00:22:25.889 "rw_mbytes_per_sec": 0, 00:22:25.889 "r_mbytes_per_sec": 0, 00:22:25.889 "w_mbytes_per_sec": 0 00:22:25.889 }, 00:22:25.889 "claimed": true, 00:22:25.889 "claim_type": "exclusive_write", 00:22:25.889 "zoned": false, 00:22:25.889 "supported_io_types": { 00:22:25.889 "read": true, 00:22:25.889 "write": true, 00:22:25.889 "unmap": true, 00:22:25.889 "flush": true, 00:22:25.889 "reset": true, 00:22:25.889 "nvme_admin": false, 00:22:25.889 "nvme_io": false, 00:22:25.889 "nvme_io_md": false, 00:22:25.889 "write_zeroes": true, 00:22:25.889 "zcopy": true, 00:22:25.889 "get_zone_info": false, 00:22:25.889 "zone_management": false, 00:22:25.889 "zone_append": false, 00:22:25.889 "compare": false, 00:22:25.889 "compare_and_write": false, 00:22:25.889 "abort": true, 00:22:25.889 "seek_hole": false, 00:22:25.889 "seek_data": false, 00:22:25.889 "copy": true, 00:22:25.889 "nvme_iov_md": false 00:22:25.889 }, 00:22:25.889 "memory_domains": [ 00:22:25.889 { 00:22:25.889 "dma_device_id": "system", 00:22:25.889 "dma_device_type": 1 00:22:25.889 }, 00:22:25.889 { 00:22:25.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.889 "dma_device_type": 2 00:22:25.889 } 00:22:25.889 ], 00:22:25.889 "driver_specific": {} 00:22:25.889 } 00:22:25.889 ] 00:22:25.889 16:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:25.889 16:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:25.889 16:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:25.889 16:39:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:25.889 16:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:25.890 16:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:25.890 16:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:25.890 16:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.890 16:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:25.890 16:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.890 16:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.890 16:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.890 16:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.148 16:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:26.148 "name": "Existed_Raid", 00:22:26.148 "uuid": "e97a012a-b79f-4fc8-8b3e-027cabb03299", 00:22:26.148 "strip_size_kb": 64, 00:22:26.148 "state": "configuring", 00:22:26.148 "raid_level": "raid0", 00:22:26.148 "superblock": true, 00:22:26.148 "num_base_bdevs": 4, 00:22:26.148 "num_base_bdevs_discovered": 3, 00:22:26.148 "num_base_bdevs_operational": 4, 00:22:26.148 "base_bdevs_list": [ 00:22:26.148 { 00:22:26.148 "name": "BaseBdev1", 00:22:26.148 "uuid": "797583b8-2500-4464-a75b-d43b90cfb59e", 00:22:26.148 "is_configured": true, 00:22:26.148 "data_offset": 2048, 00:22:26.148 "data_size": 63488 00:22:26.148 }, 00:22:26.148 { 00:22:26.148 "name": null, 00:22:26.148 "uuid": 
"5f937e5e-3a75-457a-a1ab-a2813a91d009", 00:22:26.148 "is_configured": false, 00:22:26.148 "data_offset": 2048, 00:22:26.148 "data_size": 63488 00:22:26.148 }, 00:22:26.148 { 00:22:26.148 "name": "BaseBdev3", 00:22:26.148 "uuid": "8f01826d-193a-40ce-9ab6-aa29bb0b0a0c", 00:22:26.148 "is_configured": true, 00:22:26.148 "data_offset": 2048, 00:22:26.148 "data_size": 63488 00:22:26.148 }, 00:22:26.148 { 00:22:26.148 "name": "BaseBdev4", 00:22:26.148 "uuid": "cbb6d82b-01e2-48b5-80b8-0970b5a586e6", 00:22:26.148 "is_configured": true, 00:22:26.148 "data_offset": 2048, 00:22:26.148 "data_size": 63488 00:22:26.148 } 00:22:26.148 ] 00:22:26.148 }' 00:22:26.148 16:39:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:26.148 16:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.713 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.713 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:26.970 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:26.970 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:26.970 [2024-07-24 16:39:23.739394] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:26.970 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:26.970 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:26.970 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:22:26.970 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:26.970 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:26.970 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:26.970 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.970 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:26.970 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.970 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:26.970 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.970 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:27.228 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:27.228 "name": "Existed_Raid", 00:22:27.228 "uuid": "e97a012a-b79f-4fc8-8b3e-027cabb03299", 00:22:27.228 "strip_size_kb": 64, 00:22:27.228 "state": "configuring", 00:22:27.228 "raid_level": "raid0", 00:22:27.228 "superblock": true, 00:22:27.228 "num_base_bdevs": 4, 00:22:27.228 "num_base_bdevs_discovered": 2, 00:22:27.228 "num_base_bdevs_operational": 4, 00:22:27.228 "base_bdevs_list": [ 00:22:27.228 { 00:22:27.228 "name": "BaseBdev1", 00:22:27.228 "uuid": "797583b8-2500-4464-a75b-d43b90cfb59e", 00:22:27.228 "is_configured": true, 00:22:27.228 "data_offset": 2048, 00:22:27.228 "data_size": 63488 00:22:27.228 }, 00:22:27.228 { 00:22:27.228 "name": null, 00:22:27.228 "uuid": "5f937e5e-3a75-457a-a1ab-a2813a91d009", 
00:22:27.228 "is_configured": false, 00:22:27.228 "data_offset": 2048, 00:22:27.228 "data_size": 63488 00:22:27.228 }, 00:22:27.228 { 00:22:27.228 "name": null, 00:22:27.228 "uuid": "8f01826d-193a-40ce-9ab6-aa29bb0b0a0c", 00:22:27.228 "is_configured": false, 00:22:27.228 "data_offset": 2048, 00:22:27.228 "data_size": 63488 00:22:27.228 }, 00:22:27.228 { 00:22:27.228 "name": "BaseBdev4", 00:22:27.228 "uuid": "cbb6d82b-01e2-48b5-80b8-0970b5a586e6", 00:22:27.228 "is_configured": true, 00:22:27.228 "data_offset": 2048, 00:22:27.228 "data_size": 63488 00:22:27.228 } 00:22:27.228 ] 00:22:27.228 }' 00:22:27.228 16:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:27.228 16:39:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.795 16:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.795 16:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:28.054 16:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:28.054 16:39:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:28.312 [2024-07-24 16:39:24.986752] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:28.312 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:28.312 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:28.312 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:22:28.312 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:28.312 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:28.312 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:28.312 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.312 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.312 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.312 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.312 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.312 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.570 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.570 "name": "Existed_Raid", 00:22:28.570 "uuid": "e97a012a-b79f-4fc8-8b3e-027cabb03299", 00:22:28.570 "strip_size_kb": 64, 00:22:28.570 "state": "configuring", 00:22:28.570 "raid_level": "raid0", 00:22:28.570 "superblock": true, 00:22:28.570 "num_base_bdevs": 4, 00:22:28.570 "num_base_bdevs_discovered": 3, 00:22:28.570 "num_base_bdevs_operational": 4, 00:22:28.570 "base_bdevs_list": [ 00:22:28.570 { 00:22:28.570 "name": "BaseBdev1", 00:22:28.570 "uuid": "797583b8-2500-4464-a75b-d43b90cfb59e", 00:22:28.570 "is_configured": true, 00:22:28.570 "data_offset": 2048, 00:22:28.570 "data_size": 63488 00:22:28.570 }, 00:22:28.570 { 00:22:28.570 "name": null, 00:22:28.570 "uuid": "5f937e5e-3a75-457a-a1ab-a2813a91d009", 
00:22:28.570 "is_configured": false, 00:22:28.570 "data_offset": 2048, 00:22:28.570 "data_size": 63488 00:22:28.570 }, 00:22:28.570 { 00:22:28.570 "name": "BaseBdev3", 00:22:28.570 "uuid": "8f01826d-193a-40ce-9ab6-aa29bb0b0a0c", 00:22:28.570 "is_configured": true, 00:22:28.570 "data_offset": 2048, 00:22:28.570 "data_size": 63488 00:22:28.570 }, 00:22:28.570 { 00:22:28.570 "name": "BaseBdev4", 00:22:28.570 "uuid": "cbb6d82b-01e2-48b5-80b8-0970b5a586e6", 00:22:28.570 "is_configured": true, 00:22:28.570 "data_offset": 2048, 00:22:28.570 "data_size": 63488 00:22:28.570 } 00:22:28.570 ] 00:22:28.570 }' 00:22:28.570 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.570 16:39:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.136 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.136 16:39:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:29.394 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:29.394 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:29.394 [2024-07-24 16:39:26.226184] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:29.653 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:29.653 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:29.653 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:29.653 16:39:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:29.653 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:29.653 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:29.653 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:29.653 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:29.653 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:29.653 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:29.653 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.653 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.911 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:29.911 "name": "Existed_Raid", 00:22:29.911 "uuid": "e97a012a-b79f-4fc8-8b3e-027cabb03299", 00:22:29.911 "strip_size_kb": 64, 00:22:29.911 "state": "configuring", 00:22:29.911 "raid_level": "raid0", 00:22:29.911 "superblock": true, 00:22:29.911 "num_base_bdevs": 4, 00:22:29.911 "num_base_bdevs_discovered": 2, 00:22:29.911 "num_base_bdevs_operational": 4, 00:22:29.911 "base_bdevs_list": [ 00:22:29.911 { 00:22:29.911 "name": null, 00:22:29.911 "uuid": "797583b8-2500-4464-a75b-d43b90cfb59e", 00:22:29.911 "is_configured": false, 00:22:29.911 "data_offset": 2048, 00:22:29.911 "data_size": 63488 00:22:29.911 }, 00:22:29.911 { 00:22:29.911 "name": null, 00:22:29.911 "uuid": "5f937e5e-3a75-457a-a1ab-a2813a91d009", 00:22:29.911 "is_configured": false, 00:22:29.911 
"data_offset": 2048, 00:22:29.911 "data_size": 63488 00:22:29.911 }, 00:22:29.911 { 00:22:29.911 "name": "BaseBdev3", 00:22:29.911 "uuid": "8f01826d-193a-40ce-9ab6-aa29bb0b0a0c", 00:22:29.911 "is_configured": true, 00:22:29.911 "data_offset": 2048, 00:22:29.911 "data_size": 63488 00:22:29.911 }, 00:22:29.911 { 00:22:29.911 "name": "BaseBdev4", 00:22:29.911 "uuid": "cbb6d82b-01e2-48b5-80b8-0970b5a586e6", 00:22:29.911 "is_configured": true, 00:22:29.911 "data_offset": 2048, 00:22:29.911 "data_size": 63488 00:22:29.911 } 00:22:29.911 ] 00:22:29.911 }' 00:22:29.911 16:39:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:29.911 16:39:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:30.476 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.476 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:30.734 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:30.734 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:30.993 [2024-07-24 16:39:27.625675] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:30.993 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:30.993 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:30.993 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:30.993 16:39:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:30.993 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:30.993 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:30.993 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.993 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.993 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.993 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.993 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.993 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.251 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:31.251 "name": "Existed_Raid", 00:22:31.251 "uuid": "e97a012a-b79f-4fc8-8b3e-027cabb03299", 00:22:31.252 "strip_size_kb": 64, 00:22:31.252 "state": "configuring", 00:22:31.252 "raid_level": "raid0", 00:22:31.252 "superblock": true, 00:22:31.252 "num_base_bdevs": 4, 00:22:31.252 "num_base_bdevs_discovered": 3, 00:22:31.252 "num_base_bdevs_operational": 4, 00:22:31.252 "base_bdevs_list": [ 00:22:31.252 { 00:22:31.252 "name": null, 00:22:31.252 "uuid": "797583b8-2500-4464-a75b-d43b90cfb59e", 00:22:31.252 "is_configured": false, 00:22:31.252 "data_offset": 2048, 00:22:31.252 "data_size": 63488 00:22:31.252 }, 00:22:31.252 { 00:22:31.252 "name": "BaseBdev2", 00:22:31.252 "uuid": "5f937e5e-3a75-457a-a1ab-a2813a91d009", 00:22:31.252 "is_configured": true, 00:22:31.252 
"data_offset": 2048, 00:22:31.252 "data_size": 63488 00:22:31.252 }, 00:22:31.252 { 00:22:31.252 "name": "BaseBdev3", 00:22:31.252 "uuid": "8f01826d-193a-40ce-9ab6-aa29bb0b0a0c", 00:22:31.252 "is_configured": true, 00:22:31.252 "data_offset": 2048, 00:22:31.252 "data_size": 63488 00:22:31.252 }, 00:22:31.252 { 00:22:31.252 "name": "BaseBdev4", 00:22:31.252 "uuid": "cbb6d82b-01e2-48b5-80b8-0970b5a586e6", 00:22:31.252 "is_configured": true, 00:22:31.252 "data_offset": 2048, 00:22:31.252 "data_size": 63488 00:22:31.252 } 00:22:31.252 ] 00:22:31.252 }' 00:22:31.252 16:39:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:31.252 16:39:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.818 16:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.818 16:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:31.818 16:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:31.818 16:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.818 16:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:32.076 16:39:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 797583b8-2500-4464-a75b-d43b90cfb59e 00:22:32.334 [2024-07-24 16:39:29.135781] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:32.335 [2024-07-24 16:39:29.136032] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:22:32.335 [2024-07-24 16:39:29.136051] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:32.335 [2024-07-24 16:39:29.136381] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:22:32.335 [2024-07-24 16:39:29.136587] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:22:32.335 [2024-07-24 16:39:29.136604] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:22:32.335 [2024-07-24 16:39:29.136773] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.335 NewBaseBdev 00:22:32.335 16:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:32.335 16:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:22:32.335 16:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:32.335 16:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:32.335 16:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:32.335 16:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:32.335 16:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:32.901 16:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:33.159 [ 00:22:33.159 { 00:22:33.159 "name": "NewBaseBdev", 00:22:33.159 "aliases": [ 00:22:33.159 
"797583b8-2500-4464-a75b-d43b90cfb59e" 00:22:33.159 ], 00:22:33.159 "product_name": "Malloc disk", 00:22:33.159 "block_size": 512, 00:22:33.159 "num_blocks": 65536, 00:22:33.159 "uuid": "797583b8-2500-4464-a75b-d43b90cfb59e", 00:22:33.159 "assigned_rate_limits": { 00:22:33.159 "rw_ios_per_sec": 0, 00:22:33.159 "rw_mbytes_per_sec": 0, 00:22:33.159 "r_mbytes_per_sec": 0, 00:22:33.159 "w_mbytes_per_sec": 0 00:22:33.159 }, 00:22:33.159 "claimed": true, 00:22:33.159 "claim_type": "exclusive_write", 00:22:33.159 "zoned": false, 00:22:33.159 "supported_io_types": { 00:22:33.159 "read": true, 00:22:33.159 "write": true, 00:22:33.159 "unmap": true, 00:22:33.159 "flush": true, 00:22:33.159 "reset": true, 00:22:33.159 "nvme_admin": false, 00:22:33.159 "nvme_io": false, 00:22:33.159 "nvme_io_md": false, 00:22:33.159 "write_zeroes": true, 00:22:33.159 "zcopy": true, 00:22:33.159 "get_zone_info": false, 00:22:33.159 "zone_management": false, 00:22:33.159 "zone_append": false, 00:22:33.159 "compare": false, 00:22:33.159 "compare_and_write": false, 00:22:33.159 "abort": true, 00:22:33.159 "seek_hole": false, 00:22:33.159 "seek_data": false, 00:22:33.159 "copy": true, 00:22:33.159 "nvme_iov_md": false 00:22:33.159 }, 00:22:33.159 "memory_domains": [ 00:22:33.159 { 00:22:33.159 "dma_device_id": "system", 00:22:33.159 "dma_device_type": 1 00:22:33.159 }, 00:22:33.159 { 00:22:33.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.159 "dma_device_type": 2 00:22:33.159 } 00:22:33.159 ], 00:22:33.159 "driver_specific": {} 00:22:33.159 } 00:22:33.159 ] 00:22:33.159 16:39:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:33.159 16:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:33.159 16:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:33.159 16:39:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:33.159 16:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:33.159 16:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:33.159 16:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:33.159 16:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.159 16:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.159 16:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.159 16:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.159 16:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.159 16:39:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.417 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.417 "name": "Existed_Raid", 00:22:33.417 "uuid": "e97a012a-b79f-4fc8-8b3e-027cabb03299", 00:22:33.417 "strip_size_kb": 64, 00:22:33.417 "state": "online", 00:22:33.417 "raid_level": "raid0", 00:22:33.417 "superblock": true, 00:22:33.417 "num_base_bdevs": 4, 00:22:33.417 "num_base_bdevs_discovered": 4, 00:22:33.417 "num_base_bdevs_operational": 4, 00:22:33.417 "base_bdevs_list": [ 00:22:33.417 { 00:22:33.417 "name": "NewBaseBdev", 00:22:33.417 "uuid": "797583b8-2500-4464-a75b-d43b90cfb59e", 00:22:33.417 "is_configured": true, 00:22:33.417 "data_offset": 2048, 00:22:33.417 "data_size": 63488 00:22:33.417 }, 00:22:33.417 { 00:22:33.417 "name": "BaseBdev2", 00:22:33.417 "uuid": 
"5f937e5e-3a75-457a-a1ab-a2813a91d009", 00:22:33.417 "is_configured": true, 00:22:33.417 "data_offset": 2048, 00:22:33.417 "data_size": 63488 00:22:33.417 }, 00:22:33.417 { 00:22:33.417 "name": "BaseBdev3", 00:22:33.417 "uuid": "8f01826d-193a-40ce-9ab6-aa29bb0b0a0c", 00:22:33.417 "is_configured": true, 00:22:33.417 "data_offset": 2048, 00:22:33.417 "data_size": 63488 00:22:33.417 }, 00:22:33.417 { 00:22:33.417 "name": "BaseBdev4", 00:22:33.417 "uuid": "cbb6d82b-01e2-48b5-80b8-0970b5a586e6", 00:22:33.417 "is_configured": true, 00:22:33.417 "data_offset": 2048, 00:22:33.417 "data_size": 63488 00:22:33.417 } 00:22:33.417 ] 00:22:33.417 }' 00:22:33.417 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.417 16:39:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.983 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:33.983 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:33.983 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:33.983 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:33.983 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:33.983 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:33.983 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:33.983 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:34.242 [2024-07-24 16:39:30.885034] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:34.242 16:39:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:34.242 "name": "Existed_Raid", 00:22:34.242 "aliases": [ 00:22:34.242 "e97a012a-b79f-4fc8-8b3e-027cabb03299" 00:22:34.242 ], 00:22:34.242 "product_name": "Raid Volume", 00:22:34.242 "block_size": 512, 00:22:34.242 "num_blocks": 253952, 00:22:34.242 "uuid": "e97a012a-b79f-4fc8-8b3e-027cabb03299", 00:22:34.242 "assigned_rate_limits": { 00:22:34.242 "rw_ios_per_sec": 0, 00:22:34.242 "rw_mbytes_per_sec": 0, 00:22:34.242 "r_mbytes_per_sec": 0, 00:22:34.242 "w_mbytes_per_sec": 0 00:22:34.242 }, 00:22:34.242 "claimed": false, 00:22:34.242 "zoned": false, 00:22:34.242 "supported_io_types": { 00:22:34.242 "read": true, 00:22:34.242 "write": true, 00:22:34.242 "unmap": true, 00:22:34.242 "flush": true, 00:22:34.242 "reset": true, 00:22:34.242 "nvme_admin": false, 00:22:34.242 "nvme_io": false, 00:22:34.242 "nvme_io_md": false, 00:22:34.242 "write_zeroes": true, 00:22:34.242 "zcopy": false, 00:22:34.242 "get_zone_info": false, 00:22:34.242 "zone_management": false, 00:22:34.242 "zone_append": false, 00:22:34.242 "compare": false, 00:22:34.242 "compare_and_write": false, 00:22:34.242 "abort": false, 00:22:34.242 "seek_hole": false, 00:22:34.242 "seek_data": false, 00:22:34.242 "copy": false, 00:22:34.242 "nvme_iov_md": false 00:22:34.242 }, 00:22:34.242 "memory_domains": [ 00:22:34.242 { 00:22:34.242 "dma_device_id": "system", 00:22:34.242 "dma_device_type": 1 00:22:34.242 }, 00:22:34.242 { 00:22:34.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.242 "dma_device_type": 2 00:22:34.242 }, 00:22:34.242 { 00:22:34.242 "dma_device_id": "system", 00:22:34.242 "dma_device_type": 1 00:22:34.242 }, 00:22:34.242 { 00:22:34.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.242 "dma_device_type": 2 00:22:34.242 }, 00:22:34.242 { 00:22:34.242 "dma_device_id": "system", 00:22:34.242 "dma_device_type": 1 00:22:34.242 }, 00:22:34.242 { 00:22:34.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:22:34.242 "dma_device_type": 2 00:22:34.242 }, 00:22:34.242 { 00:22:34.242 "dma_device_id": "system", 00:22:34.242 "dma_device_type": 1 00:22:34.242 }, 00:22:34.242 { 00:22:34.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.242 "dma_device_type": 2 00:22:34.242 } 00:22:34.242 ], 00:22:34.242 "driver_specific": { 00:22:34.242 "raid": { 00:22:34.242 "uuid": "e97a012a-b79f-4fc8-8b3e-027cabb03299", 00:22:34.242 "strip_size_kb": 64, 00:22:34.242 "state": "online", 00:22:34.242 "raid_level": "raid0", 00:22:34.242 "superblock": true, 00:22:34.242 "num_base_bdevs": 4, 00:22:34.242 "num_base_bdevs_discovered": 4, 00:22:34.242 "num_base_bdevs_operational": 4, 00:22:34.242 "base_bdevs_list": [ 00:22:34.242 { 00:22:34.242 "name": "NewBaseBdev", 00:22:34.242 "uuid": "797583b8-2500-4464-a75b-d43b90cfb59e", 00:22:34.242 "is_configured": true, 00:22:34.242 "data_offset": 2048, 00:22:34.242 "data_size": 63488 00:22:34.242 }, 00:22:34.242 { 00:22:34.242 "name": "BaseBdev2", 00:22:34.242 "uuid": "5f937e5e-3a75-457a-a1ab-a2813a91d009", 00:22:34.242 "is_configured": true, 00:22:34.242 "data_offset": 2048, 00:22:34.242 "data_size": 63488 00:22:34.242 }, 00:22:34.242 { 00:22:34.242 "name": "BaseBdev3", 00:22:34.242 "uuid": "8f01826d-193a-40ce-9ab6-aa29bb0b0a0c", 00:22:34.242 "is_configured": true, 00:22:34.242 "data_offset": 2048, 00:22:34.242 "data_size": 63488 00:22:34.242 }, 00:22:34.242 { 00:22:34.242 "name": "BaseBdev4", 00:22:34.242 "uuid": "cbb6d82b-01e2-48b5-80b8-0970b5a586e6", 00:22:34.242 "is_configured": true, 00:22:34.242 "data_offset": 2048, 00:22:34.242 "data_size": 63488 00:22:34.242 } 00:22:34.242 ] 00:22:34.242 } 00:22:34.242 } 00:22:34.242 }' 00:22:34.242 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:34.242 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:34.242 BaseBdev2 
00:22:34.242 BaseBdev3 00:22:34.242 BaseBdev4' 00:22:34.242 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:34.242 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:34.242 16:39:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:34.501 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:34.501 "name": "NewBaseBdev", 00:22:34.501 "aliases": [ 00:22:34.501 "797583b8-2500-4464-a75b-d43b90cfb59e" 00:22:34.501 ], 00:22:34.501 "product_name": "Malloc disk", 00:22:34.501 "block_size": 512, 00:22:34.501 "num_blocks": 65536, 00:22:34.501 "uuid": "797583b8-2500-4464-a75b-d43b90cfb59e", 00:22:34.501 "assigned_rate_limits": { 00:22:34.501 "rw_ios_per_sec": 0, 00:22:34.501 "rw_mbytes_per_sec": 0, 00:22:34.501 "r_mbytes_per_sec": 0, 00:22:34.501 "w_mbytes_per_sec": 0 00:22:34.501 }, 00:22:34.501 "claimed": true, 00:22:34.501 "claim_type": "exclusive_write", 00:22:34.501 "zoned": false, 00:22:34.501 "supported_io_types": { 00:22:34.502 "read": true, 00:22:34.502 "write": true, 00:22:34.502 "unmap": true, 00:22:34.502 "flush": true, 00:22:34.502 "reset": true, 00:22:34.502 "nvme_admin": false, 00:22:34.502 "nvme_io": false, 00:22:34.502 "nvme_io_md": false, 00:22:34.502 "write_zeroes": true, 00:22:34.502 "zcopy": true, 00:22:34.502 "get_zone_info": false, 00:22:34.502 "zone_management": false, 00:22:34.502 "zone_append": false, 00:22:34.502 "compare": false, 00:22:34.502 "compare_and_write": false, 00:22:34.502 "abort": true, 00:22:34.502 "seek_hole": false, 00:22:34.502 "seek_data": false, 00:22:34.502 "copy": true, 00:22:34.502 "nvme_iov_md": false 00:22:34.502 }, 00:22:34.502 "memory_domains": [ 00:22:34.502 { 00:22:34.502 "dma_device_id": "system", 00:22:34.502 "dma_device_type": 1 
00:22:34.502 }, 00:22:34.502 { 00:22:34.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.502 "dma_device_type": 2 00:22:34.502 } 00:22:34.502 ], 00:22:34.502 "driver_specific": {} 00:22:34.502 }' 00:22:34.502 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.502 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.502 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:34.502 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.502 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.502 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:34.502 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.502 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.760 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:34.760 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.760 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.760 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:34.760 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:34.760 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:34.760 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:35.018 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:22:35.018 "name": "BaseBdev2", 00:22:35.018 "aliases": [ 00:22:35.018 "5f937e5e-3a75-457a-a1ab-a2813a91d009" 00:22:35.018 ], 00:22:35.018 "product_name": "Malloc disk", 00:22:35.018 "block_size": 512, 00:22:35.018 "num_blocks": 65536, 00:22:35.018 "uuid": "5f937e5e-3a75-457a-a1ab-a2813a91d009", 00:22:35.018 "assigned_rate_limits": { 00:22:35.018 "rw_ios_per_sec": 0, 00:22:35.018 "rw_mbytes_per_sec": 0, 00:22:35.018 "r_mbytes_per_sec": 0, 00:22:35.018 "w_mbytes_per_sec": 0 00:22:35.018 }, 00:22:35.018 "claimed": true, 00:22:35.018 "claim_type": "exclusive_write", 00:22:35.018 "zoned": false, 00:22:35.018 "supported_io_types": { 00:22:35.018 "read": true, 00:22:35.018 "write": true, 00:22:35.018 "unmap": true, 00:22:35.018 "flush": true, 00:22:35.018 "reset": true, 00:22:35.018 "nvme_admin": false, 00:22:35.018 "nvme_io": false, 00:22:35.018 "nvme_io_md": false, 00:22:35.018 "write_zeroes": true, 00:22:35.018 "zcopy": true, 00:22:35.018 "get_zone_info": false, 00:22:35.018 "zone_management": false, 00:22:35.018 "zone_append": false, 00:22:35.018 "compare": false, 00:22:35.018 "compare_and_write": false, 00:22:35.018 "abort": true, 00:22:35.018 "seek_hole": false, 00:22:35.018 "seek_data": false, 00:22:35.018 "copy": true, 00:22:35.018 "nvme_iov_md": false 00:22:35.018 }, 00:22:35.018 "memory_domains": [ 00:22:35.018 { 00:22:35.018 "dma_device_id": "system", 00:22:35.018 "dma_device_type": 1 00:22:35.018 }, 00:22:35.018 { 00:22:35.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.018 "dma_device_type": 2 00:22:35.018 } 00:22:35.018 ], 00:22:35.018 "driver_specific": {} 00:22:35.018 }' 00:22:35.018 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.018 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.018 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:35.018 16:39:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.018 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.018 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:35.018 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.277 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.277 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:35.277 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.277 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.277 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.277 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:35.277 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:35.277 16:39:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:35.535 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:35.535 "name": "BaseBdev3", 00:22:35.535 "aliases": [ 00:22:35.535 "8f01826d-193a-40ce-9ab6-aa29bb0b0a0c" 00:22:35.535 ], 00:22:35.535 "product_name": "Malloc disk", 00:22:35.535 "block_size": 512, 00:22:35.535 "num_blocks": 65536, 00:22:35.535 "uuid": "8f01826d-193a-40ce-9ab6-aa29bb0b0a0c", 00:22:35.535 "assigned_rate_limits": { 00:22:35.535 "rw_ios_per_sec": 0, 00:22:35.535 "rw_mbytes_per_sec": 0, 00:22:35.535 "r_mbytes_per_sec": 0, 00:22:35.535 "w_mbytes_per_sec": 0 00:22:35.535 }, 00:22:35.535 "claimed": true, 
00:22:35.535 "claim_type": "exclusive_write", 00:22:35.535 "zoned": false, 00:22:35.535 "supported_io_types": { 00:22:35.535 "read": true, 00:22:35.535 "write": true, 00:22:35.535 "unmap": true, 00:22:35.535 "flush": true, 00:22:35.535 "reset": true, 00:22:35.535 "nvme_admin": false, 00:22:35.535 "nvme_io": false, 00:22:35.535 "nvme_io_md": false, 00:22:35.535 "write_zeroes": true, 00:22:35.535 "zcopy": true, 00:22:35.535 "get_zone_info": false, 00:22:35.535 "zone_management": false, 00:22:35.535 "zone_append": false, 00:22:35.535 "compare": false, 00:22:35.535 "compare_and_write": false, 00:22:35.535 "abort": true, 00:22:35.535 "seek_hole": false, 00:22:35.535 "seek_data": false, 00:22:35.535 "copy": true, 00:22:35.535 "nvme_iov_md": false 00:22:35.535 }, 00:22:35.535 "memory_domains": [ 00:22:35.535 { 00:22:35.535 "dma_device_id": "system", 00:22:35.535 "dma_device_type": 1 00:22:35.535 }, 00:22:35.535 { 00:22:35.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.535 "dma_device_type": 2 00:22:35.535 } 00:22:35.535 ], 00:22:35.535 "driver_specific": {} 00:22:35.535 }' 00:22:35.535 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.535 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.535 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:35.535 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.535 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.535 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:35.535 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.793 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.793 16:39:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:35.793 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.793 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.793 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.793 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:35.793 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:35.793 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:36.051 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:36.051 "name": "BaseBdev4", 00:22:36.051 "aliases": [ 00:22:36.051 "cbb6d82b-01e2-48b5-80b8-0970b5a586e6" 00:22:36.051 ], 00:22:36.051 "product_name": "Malloc disk", 00:22:36.051 "block_size": 512, 00:22:36.051 "num_blocks": 65536, 00:22:36.051 "uuid": "cbb6d82b-01e2-48b5-80b8-0970b5a586e6", 00:22:36.051 "assigned_rate_limits": { 00:22:36.051 "rw_ios_per_sec": 0, 00:22:36.051 "rw_mbytes_per_sec": 0, 00:22:36.051 "r_mbytes_per_sec": 0, 00:22:36.051 "w_mbytes_per_sec": 0 00:22:36.051 }, 00:22:36.051 "claimed": true, 00:22:36.051 "claim_type": "exclusive_write", 00:22:36.051 "zoned": false, 00:22:36.051 "supported_io_types": { 00:22:36.051 "read": true, 00:22:36.051 "write": true, 00:22:36.051 "unmap": true, 00:22:36.051 "flush": true, 00:22:36.051 "reset": true, 00:22:36.051 "nvme_admin": false, 00:22:36.051 "nvme_io": false, 00:22:36.051 "nvme_io_md": false, 00:22:36.051 "write_zeroes": true, 00:22:36.051 "zcopy": true, 00:22:36.051 "get_zone_info": false, 00:22:36.051 "zone_management": false, 00:22:36.051 "zone_append": false, 00:22:36.051 "compare": false, 00:22:36.051 
"compare_and_write": false, 00:22:36.051 "abort": true, 00:22:36.051 "seek_hole": false, 00:22:36.051 "seek_data": false, 00:22:36.051 "copy": true, 00:22:36.051 "nvme_iov_md": false 00:22:36.051 }, 00:22:36.051 "memory_domains": [ 00:22:36.051 { 00:22:36.051 "dma_device_id": "system", 00:22:36.051 "dma_device_type": 1 00:22:36.051 }, 00:22:36.051 { 00:22:36.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.051 "dma_device_type": 2 00:22:36.051 } 00:22:36.051 ], 00:22:36.051 "driver_specific": {} 00:22:36.051 }' 00:22:36.051 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:36.051 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:36.051 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:36.051 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:36.052 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:36.310 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:36.310 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:36.310 16:39:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:36.310 16:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:36.310 16:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:36.310 16:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:36.310 16:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:36.310 16:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:22:36.876 [2024-07-24 16:39:33.571985] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:36.876 [2024-07-24 16:39:33.572021] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:36.876 [2024-07-24 16:39:33.572107] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:36.876 [2024-07-24 16:39:33.572199] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:36.876 [2024-07-24 16:39:33.572216] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:22:36.876 16:39:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1685226 00:22:36.876 16:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 1685226 ']' 00:22:36.876 16:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 1685226 00:22:36.876 16:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:22:36.876 16:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:36.876 16:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1685226 00:22:36.876 16:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:36.876 16:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:36.876 16:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1685226' 00:22:36.876 killing process with pid 1685226 00:22:36.876 16:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 1685226 00:22:36.876 [2024-07-24 16:39:33.656812] 
bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:36.876 16:39:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 1685226 00:22:37.441 [2024-07-24 16:39:34.116972] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:39.390 16:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:39.390 00:22:39.390 real 0m33.796s 00:22:39.390 user 0m59.269s 00:22:39.390 sys 0m5.666s 00:22:39.390 16:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:39.390 16:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.390 ************************************ 00:22:39.390 END TEST raid_state_function_test_sb 00:22:39.390 ************************************ 00:22:39.390 16:39:35 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:22:39.390 16:39:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:39.390 16:39:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:39.390 16:39:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:39.390 ************************************ 00:22:39.390 START TEST raid_superblock_test 00:22:39.390 ************************************ 00:22:39.390 16:39:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:22:39.390 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:22:39.390 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 
00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=1691863 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 1691863 /var/tmp/spdk-raid.sock 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 1691863 ']' 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:22:39.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.391 16:39:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:39.391 [2024-07-24 16:39:36.022675] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:22:39.391 [2024-07-24 16:39:36.022796] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1691863 ] 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:22:39.391 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: 
Requested device 0000:3f:01.5 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:39.391 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:39.391 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:39.391 [2024-07-24 16:39:36.246490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.957 [2024-07-24 16:39:36.536840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.215 [2024-07-24 16:39:36.896326] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.215 [2024-07-24 16:39:36.896357] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:40.474 16:39:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 
00:22:40.474 16:39:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:22:40.474 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:22:40.474 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:40.474 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:22:40.474 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:22:40.474 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:40.474 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:40.474 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:40.474 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:40.474 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:41.050 malloc1 00:22:41.050 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:41.050 [2024-07-24 16:39:37.864864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:41.050 [2024-07-24 16:39:37.864923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.050 [2024-07-24 16:39:37.864955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:22:41.050 [2024-07-24 16:39:37.864971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.050 [2024-07-24 
16:39:37.867733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.050 [2024-07-24 16:39:37.867771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:41.050 pt1 00:22:41.050 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:41.050 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:41.050 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:22:41.050 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:22:41.050 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:41.050 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:41.051 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:41.051 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:41.051 16:39:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:41.618 malloc2 00:22:41.618 16:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:41.876 [2024-07-24 16:39:38.652808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:41.876 [2024-07-24 16:39:38.652868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.876 [2024-07-24 16:39:38.652896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:22:41.876 
[2024-07-24 16:39:38.652912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.876 [2024-07-24 16:39:38.655669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.876 [2024-07-24 16:39:38.655710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:41.876 pt2 00:22:41.876 16:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:41.876 16:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:41.876 16:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:22:41.876 16:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:22:41.876 16:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:41.876 16:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:41.876 16:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:41.876 16:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:41.876 16:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:42.443 malloc3 00:22:42.443 16:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:42.702 [2024-07-24 16:39:39.443641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:42.702 [2024-07-24 16:39:39.443703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.702 [2024-07-24 
16:39:39.443732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:22:42.702 [2024-07-24 16:39:39.443748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.702 [2024-07-24 16:39:39.446491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.702 [2024-07-24 16:39:39.446526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:42.702 pt3 00:22:42.702 16:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:42.702 16:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:42.702 16:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:22:42.702 16:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:22:42.702 16:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:42.702 16:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:42.702 16:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:42.702 16:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:42.702 16:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:43.269 malloc4 00:22:43.269 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:43.528 [2024-07-24 16:39:40.233786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:43.528 
[2024-07-24 16:39:40.233862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.528 [2024-07-24 16:39:40.233891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:22:43.528 [2024-07-24 16:39:40.233906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.528 [2024-07-24 16:39:40.236695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.528 [2024-07-24 16:39:40.236730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:43.528 pt4 00:22:43.528 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:22:43.528 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:43.528 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:44.093 [2024-07-24 16:39:40.731209] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:44.093 [2024-07-24 16:39:40.733566] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:44.093 [2024-07-24 16:39:40.733652] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:44.093 [2024-07-24 16:39:40.733709] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:44.093 [2024-07-24 16:39:40.733958] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:22:44.093 [2024-07-24 16:39:40.733975] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:44.093 [2024-07-24 16:39:40.734344] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:22:44.093 [2024-07-24 16:39:40.734574] bdev_raid.c:1750:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x616000042080 00:22:44.093 [2024-07-24 16:39:40.734593] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042080 00:22:44.093 [2024-07-24 16:39:40.734797] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.093 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:44.093 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:44.093 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:44.093 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:44.093 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:44.093 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:44.093 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.093 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.093 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.093 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.093 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.093 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.350 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:44.350 "name": "raid_bdev1", 00:22:44.350 "uuid": "57d127db-9106-4c1b-a6ad-bef5d9eb38e7", 00:22:44.350 "strip_size_kb": 64, 00:22:44.350 "state": 
"online", 00:22:44.350 "raid_level": "raid0", 00:22:44.350 "superblock": true, 00:22:44.350 "num_base_bdevs": 4, 00:22:44.350 "num_base_bdevs_discovered": 4, 00:22:44.350 "num_base_bdevs_operational": 4, 00:22:44.350 "base_bdevs_list": [ 00:22:44.350 { 00:22:44.350 "name": "pt1", 00:22:44.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:44.350 "is_configured": true, 00:22:44.350 "data_offset": 2048, 00:22:44.350 "data_size": 63488 00:22:44.350 }, 00:22:44.350 { 00:22:44.350 "name": "pt2", 00:22:44.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:44.350 "is_configured": true, 00:22:44.350 "data_offset": 2048, 00:22:44.350 "data_size": 63488 00:22:44.350 }, 00:22:44.350 { 00:22:44.350 "name": "pt3", 00:22:44.350 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:44.350 "is_configured": true, 00:22:44.350 "data_offset": 2048, 00:22:44.350 "data_size": 63488 00:22:44.350 }, 00:22:44.350 { 00:22:44.350 "name": "pt4", 00:22:44.350 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:44.350 "is_configured": true, 00:22:44.350 "data_offset": 2048, 00:22:44.350 "data_size": 63488 00:22:44.350 } 00:22:44.350 ] 00:22:44.350 }' 00:22:44.350 16:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:44.350 16:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.915 16:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:22:44.915 16:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:44.915 16:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:44.915 16:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:44.915 16:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:44.915 16:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
00:22:44.915 16:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:44.915 16:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:44.915 [2024-07-24 16:39:41.762342] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:45.173 16:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:45.173 "name": "raid_bdev1", 00:22:45.173 "aliases": [ 00:22:45.173 "57d127db-9106-4c1b-a6ad-bef5d9eb38e7" 00:22:45.173 ], 00:22:45.173 "product_name": "Raid Volume", 00:22:45.173 "block_size": 512, 00:22:45.173 "num_blocks": 253952, 00:22:45.173 "uuid": "57d127db-9106-4c1b-a6ad-bef5d9eb38e7", 00:22:45.173 "assigned_rate_limits": { 00:22:45.173 "rw_ios_per_sec": 0, 00:22:45.173 "rw_mbytes_per_sec": 0, 00:22:45.173 "r_mbytes_per_sec": 0, 00:22:45.173 "w_mbytes_per_sec": 0 00:22:45.173 }, 00:22:45.173 "claimed": false, 00:22:45.173 "zoned": false, 00:22:45.173 "supported_io_types": { 00:22:45.173 "read": true, 00:22:45.173 "write": true, 00:22:45.173 "unmap": true, 00:22:45.173 "flush": true, 00:22:45.173 "reset": true, 00:22:45.173 "nvme_admin": false, 00:22:45.173 "nvme_io": false, 00:22:45.173 "nvme_io_md": false, 00:22:45.173 "write_zeroes": true, 00:22:45.173 "zcopy": false, 00:22:45.173 "get_zone_info": false, 00:22:45.173 "zone_management": false, 00:22:45.173 "zone_append": false, 00:22:45.173 "compare": false, 00:22:45.173 "compare_and_write": false, 00:22:45.173 "abort": false, 00:22:45.173 "seek_hole": false, 00:22:45.173 "seek_data": false, 00:22:45.173 "copy": false, 00:22:45.173 "nvme_iov_md": false 00:22:45.173 }, 00:22:45.173 "memory_domains": [ 00:22:45.173 { 00:22:45.173 "dma_device_id": "system", 00:22:45.173 "dma_device_type": 1 00:22:45.173 }, 00:22:45.173 { 00:22:45.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.173 
"dma_device_type": 2 00:22:45.173 }, 00:22:45.173 { 00:22:45.173 "dma_device_id": "system", 00:22:45.173 "dma_device_type": 1 00:22:45.173 }, 00:22:45.173 { 00:22:45.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.173 "dma_device_type": 2 00:22:45.173 }, 00:22:45.173 { 00:22:45.173 "dma_device_id": "system", 00:22:45.173 "dma_device_type": 1 00:22:45.173 }, 00:22:45.173 { 00:22:45.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.173 "dma_device_type": 2 00:22:45.173 }, 00:22:45.173 { 00:22:45.173 "dma_device_id": "system", 00:22:45.173 "dma_device_type": 1 00:22:45.173 }, 00:22:45.173 { 00:22:45.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.173 "dma_device_type": 2 00:22:45.173 } 00:22:45.173 ], 00:22:45.173 "driver_specific": { 00:22:45.173 "raid": { 00:22:45.173 "uuid": "57d127db-9106-4c1b-a6ad-bef5d9eb38e7", 00:22:45.173 "strip_size_kb": 64, 00:22:45.173 "state": "online", 00:22:45.173 "raid_level": "raid0", 00:22:45.173 "superblock": true, 00:22:45.173 "num_base_bdevs": 4, 00:22:45.173 "num_base_bdevs_discovered": 4, 00:22:45.173 "num_base_bdevs_operational": 4, 00:22:45.173 "base_bdevs_list": [ 00:22:45.173 { 00:22:45.173 "name": "pt1", 00:22:45.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:45.173 "is_configured": true, 00:22:45.173 "data_offset": 2048, 00:22:45.173 "data_size": 63488 00:22:45.173 }, 00:22:45.173 { 00:22:45.173 "name": "pt2", 00:22:45.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:45.173 "is_configured": true, 00:22:45.173 "data_offset": 2048, 00:22:45.173 "data_size": 63488 00:22:45.173 }, 00:22:45.173 { 00:22:45.173 "name": "pt3", 00:22:45.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:45.173 "is_configured": true, 00:22:45.173 "data_offset": 2048, 00:22:45.173 "data_size": 63488 00:22:45.173 }, 00:22:45.173 { 00:22:45.173 "name": "pt4", 00:22:45.173 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:45.173 "is_configured": true, 00:22:45.173 "data_offset": 2048, 00:22:45.173 
"data_size": 63488 00:22:45.173 } 00:22:45.173 ] 00:22:45.173 } 00:22:45.173 } 00:22:45.173 }' 00:22:45.173 16:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:45.173 16:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:45.173 pt2 00:22:45.173 pt3 00:22:45.173 pt4' 00:22:45.173 16:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:45.173 16:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:45.173 16:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:45.431 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:45.431 "name": "pt1", 00:22:45.431 "aliases": [ 00:22:45.431 "00000000-0000-0000-0000-000000000001" 00:22:45.431 ], 00:22:45.431 "product_name": "passthru", 00:22:45.431 "block_size": 512, 00:22:45.431 "num_blocks": 65536, 00:22:45.431 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:45.431 "assigned_rate_limits": { 00:22:45.431 "rw_ios_per_sec": 0, 00:22:45.431 "rw_mbytes_per_sec": 0, 00:22:45.431 "r_mbytes_per_sec": 0, 00:22:45.431 "w_mbytes_per_sec": 0 00:22:45.431 }, 00:22:45.431 "claimed": true, 00:22:45.431 "claim_type": "exclusive_write", 00:22:45.431 "zoned": false, 00:22:45.431 "supported_io_types": { 00:22:45.431 "read": true, 00:22:45.431 "write": true, 00:22:45.431 "unmap": true, 00:22:45.431 "flush": true, 00:22:45.431 "reset": true, 00:22:45.431 "nvme_admin": false, 00:22:45.431 "nvme_io": false, 00:22:45.431 "nvme_io_md": false, 00:22:45.431 "write_zeroes": true, 00:22:45.431 "zcopy": true, 00:22:45.431 "get_zone_info": false, 00:22:45.431 "zone_management": false, 00:22:45.431 "zone_append": false, 00:22:45.431 "compare": false, 
00:22:45.431 "compare_and_write": false, 00:22:45.431 "abort": true, 00:22:45.431 "seek_hole": false, 00:22:45.431 "seek_data": false, 00:22:45.431 "copy": true, 00:22:45.431 "nvme_iov_md": false 00:22:45.431 }, 00:22:45.431 "memory_domains": [ 00:22:45.431 { 00:22:45.431 "dma_device_id": "system", 00:22:45.431 "dma_device_type": 1 00:22:45.431 }, 00:22:45.431 { 00:22:45.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.431 "dma_device_type": 2 00:22:45.431 } 00:22:45.431 ], 00:22:45.431 "driver_specific": { 00:22:45.431 "passthru": { 00:22:45.431 "name": "pt1", 00:22:45.431 "base_bdev_name": "malloc1" 00:22:45.431 } 00:22:45.431 } 00:22:45.431 }' 00:22:45.431 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:45.431 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:45.431 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:45.431 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:45.431 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:45.431 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:45.431 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:45.431 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:45.689 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:45.689 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:45.689 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:45.689 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:45.689 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:45.689 16:39:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:45.689 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:45.947 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:45.947 "name": "pt2", 00:22:45.947 "aliases": [ 00:22:45.947 "00000000-0000-0000-0000-000000000002" 00:22:45.947 ], 00:22:45.947 "product_name": "passthru", 00:22:45.947 "block_size": 512, 00:22:45.947 "num_blocks": 65536, 00:22:45.947 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:45.947 "assigned_rate_limits": { 00:22:45.947 "rw_ios_per_sec": 0, 00:22:45.947 "rw_mbytes_per_sec": 0, 00:22:45.947 "r_mbytes_per_sec": 0, 00:22:45.947 "w_mbytes_per_sec": 0 00:22:45.947 }, 00:22:45.947 "claimed": true, 00:22:45.947 "claim_type": "exclusive_write", 00:22:45.947 "zoned": false, 00:22:45.947 "supported_io_types": { 00:22:45.947 "read": true, 00:22:45.947 "write": true, 00:22:45.947 "unmap": true, 00:22:45.947 "flush": true, 00:22:45.947 "reset": true, 00:22:45.947 "nvme_admin": false, 00:22:45.947 "nvme_io": false, 00:22:45.947 "nvme_io_md": false, 00:22:45.947 "write_zeroes": true, 00:22:45.947 "zcopy": true, 00:22:45.947 "get_zone_info": false, 00:22:45.947 "zone_management": false, 00:22:45.947 "zone_append": false, 00:22:45.947 "compare": false, 00:22:45.947 "compare_and_write": false, 00:22:45.947 "abort": true, 00:22:45.947 "seek_hole": false, 00:22:45.947 "seek_data": false, 00:22:45.947 "copy": true, 00:22:45.947 "nvme_iov_md": false 00:22:45.947 }, 00:22:45.947 "memory_domains": [ 00:22:45.947 { 00:22:45.947 "dma_device_id": "system", 00:22:45.947 "dma_device_type": 1 00:22:45.947 }, 00:22:45.947 { 00:22:45.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.947 "dma_device_type": 2 00:22:45.947 } 00:22:45.947 ], 00:22:45.947 "driver_specific": { 00:22:45.947 "passthru": { 00:22:45.947 
"name": "pt2", 00:22:45.947 "base_bdev_name": "malloc2" 00:22:45.947 } 00:22:45.947 } 00:22:45.947 }' 00:22:45.947 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:45.947 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:45.947 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:45.947 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:45.947 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:45.947 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:45.947 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:46.206 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:46.206 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:46.206 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:46.206 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:46.206 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:46.206 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:46.206 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:46.206 16:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:46.464 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:46.464 "name": "pt3", 00:22:46.464 "aliases": [ 00:22:46.464 "00000000-0000-0000-0000-000000000003" 00:22:46.464 ], 00:22:46.464 "product_name": "passthru", 00:22:46.464 "block_size": 512, 00:22:46.464 
"num_blocks": 65536, 00:22:46.464 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:46.464 "assigned_rate_limits": { 00:22:46.464 "rw_ios_per_sec": 0, 00:22:46.464 "rw_mbytes_per_sec": 0, 00:22:46.464 "r_mbytes_per_sec": 0, 00:22:46.464 "w_mbytes_per_sec": 0 00:22:46.464 }, 00:22:46.464 "claimed": true, 00:22:46.464 "claim_type": "exclusive_write", 00:22:46.464 "zoned": false, 00:22:46.464 "supported_io_types": { 00:22:46.464 "read": true, 00:22:46.464 "write": true, 00:22:46.464 "unmap": true, 00:22:46.464 "flush": true, 00:22:46.464 "reset": true, 00:22:46.464 "nvme_admin": false, 00:22:46.464 "nvme_io": false, 00:22:46.464 "nvme_io_md": false, 00:22:46.464 "write_zeroes": true, 00:22:46.464 "zcopy": true, 00:22:46.464 "get_zone_info": false, 00:22:46.464 "zone_management": false, 00:22:46.464 "zone_append": false, 00:22:46.464 "compare": false, 00:22:46.464 "compare_and_write": false, 00:22:46.464 "abort": true, 00:22:46.464 "seek_hole": false, 00:22:46.464 "seek_data": false, 00:22:46.464 "copy": true, 00:22:46.464 "nvme_iov_md": false 00:22:46.464 }, 00:22:46.464 "memory_domains": [ 00:22:46.464 { 00:22:46.464 "dma_device_id": "system", 00:22:46.464 "dma_device_type": 1 00:22:46.464 }, 00:22:46.464 { 00:22:46.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.464 "dma_device_type": 2 00:22:46.464 } 00:22:46.464 ], 00:22:46.464 "driver_specific": { 00:22:46.464 "passthru": { 00:22:46.464 "name": "pt3", 00:22:46.464 "base_bdev_name": "malloc3" 00:22:46.464 } 00:22:46.464 } 00:22:46.464 }' 00:22:46.464 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:46.464 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:46.464 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:46.464 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:46.464 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
jq .md_size 00:22:46.731 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:46.731 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:46.731 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:46.731 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:46.731 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:46.731 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:46.731 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:46.731 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:46.731 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:46.731 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:46.992 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:46.992 "name": "pt4", 00:22:46.992 "aliases": [ 00:22:46.992 "00000000-0000-0000-0000-000000000004" 00:22:46.992 ], 00:22:46.992 "product_name": "passthru", 00:22:46.992 "block_size": 512, 00:22:46.992 "num_blocks": 65536, 00:22:46.992 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:46.992 "assigned_rate_limits": { 00:22:46.992 "rw_ios_per_sec": 0, 00:22:46.992 "rw_mbytes_per_sec": 0, 00:22:46.992 "r_mbytes_per_sec": 0, 00:22:46.992 "w_mbytes_per_sec": 0 00:22:46.992 }, 00:22:46.992 "claimed": true, 00:22:46.992 "claim_type": "exclusive_write", 00:22:46.992 "zoned": false, 00:22:46.992 "supported_io_types": { 00:22:46.992 "read": true, 00:22:46.992 "write": true, 00:22:46.992 "unmap": true, 00:22:46.992 "flush": true, 00:22:46.992 "reset": true, 00:22:46.992 
"nvme_admin": false, 00:22:46.992 "nvme_io": false, 00:22:46.992 "nvme_io_md": false, 00:22:46.992 "write_zeroes": true, 00:22:46.992 "zcopy": true, 00:22:46.992 "get_zone_info": false, 00:22:46.992 "zone_management": false, 00:22:46.992 "zone_append": false, 00:22:46.992 "compare": false, 00:22:46.992 "compare_and_write": false, 00:22:46.992 "abort": true, 00:22:46.992 "seek_hole": false, 00:22:46.992 "seek_data": false, 00:22:46.992 "copy": true, 00:22:46.992 "nvme_iov_md": false 00:22:46.992 }, 00:22:46.992 "memory_domains": [ 00:22:46.992 { 00:22:46.992 "dma_device_id": "system", 00:22:46.992 "dma_device_type": 1 00:22:46.992 }, 00:22:46.992 { 00:22:46.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:46.992 "dma_device_type": 2 00:22:46.992 } 00:22:46.992 ], 00:22:46.992 "driver_specific": { 00:22:46.992 "passthru": { 00:22:46.992 "name": "pt4", 00:22:46.992 "base_bdev_name": "malloc4" 00:22:46.992 } 00:22:46.992 } 00:22:46.992 }' 00:22:46.992 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:46.992 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:46.992 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:46.992 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:47.249 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:47.249 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:47.249 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:47.249 16:39:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:47.249 16:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:47.249 16:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:47.249 16:39:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:47.249 16:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:47.249 16:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:22:47.249 16:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:47.507 [2024-07-24 16:39:44.301291] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:47.507 16:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=57d127db-9106-4c1b-a6ad-bef5d9eb38e7 00:22:47.507 16:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 57d127db-9106-4c1b-a6ad-bef5d9eb38e7 ']' 00:22:47.507 16:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:47.764 [2024-07-24 16:39:44.529498] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:47.764 [2024-07-24 16:39:44.529530] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:47.764 [2024-07-24 16:39:44.529619] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:47.764 [2024-07-24 16:39:44.529700] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:47.764 [2024-07-24 16:39:44.529719] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name raid_bdev1, state offline 00:22:47.764 16:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.764 16:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r 
'.[]' 00:22:48.026 16:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:22:48.026 16:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:22:48.026 16:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:48.026 16:39:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:48.284 16:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:48.284 16:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:48.541 16:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:48.541 16:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:48.798 16:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:48.798 16:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:48.798 16:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:48.798 16:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:49.056 16:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:22:49.056 16:39:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:49.056 16:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:22:49.056 16:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:49.056 16:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:49.056 16:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.056 16:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:49.056 16:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.056 16:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:49.056 16:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.056 16:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:49.056 16:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:49.056 16:39:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:49.313 
[2024-07-24 16:39:46.097765] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:49.313 [2024-07-24 16:39:46.100085] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:49.313 [2024-07-24 16:39:46.100155] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:49.313 [2024-07-24 16:39:46.100201] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:49.313 [2024-07-24 16:39:46.100257] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:49.313 [2024-07-24 16:39:46.100312] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:49.314 [2024-07-24 16:39:46.100342] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:49.314 [2024-07-24 16:39:46.100376] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:49.314 [2024-07-24 16:39:46.100398] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:49.314 [2024-07-24 16:39:46.100415] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state configuring 00:22:49.314 request: 00:22:49.314 { 00:22:49.314 "name": "raid_bdev1", 00:22:49.314 "raid_level": "raid0", 00:22:49.314 "base_bdevs": [ 00:22:49.314 "malloc1", 00:22:49.314 "malloc2", 00:22:49.314 "malloc3", 00:22:49.314 "malloc4" 00:22:49.314 ], 00:22:49.314 "strip_size_kb": 64, 00:22:49.314 "superblock": false, 00:22:49.314 "method": "bdev_raid_create", 00:22:49.314 "req_id": 1 00:22:49.314 } 00:22:49.314 Got JSON-RPC error response 00:22:49.314 response: 00:22:49.314 { 00:22:49.314 "code": -17, 00:22:49.314 "message": "Failed to create RAID bdev raid_bdev1: File 
exists" 00:22:49.314 } 00:22:49.314 16:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:22:49.314 16:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:49.314 16:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:49.314 16:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:49.314 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.314 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:22:49.572 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:22:49.572 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:22:49.572 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:49.830 [2024-07-24 16:39:46.538869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:49.830 [2024-07-24 16:39:46.538940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.830 [2024-07-24 16:39:46.538963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:22:49.830 [2024-07-24 16:39:46.538981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.830 [2024-07-24 16:39:46.541744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.830 [2024-07-24 16:39:46.541783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:49.830 [2024-07-24 16:39:46.541874] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt1 00:22:49.830 [2024-07-24 16:39:46.541939] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:49.830 pt1 00:22:49.830 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:22:49.830 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:49.830 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:49.830 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:49.830 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:49.830 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:49.830 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.830 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.830 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:49.830 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.830 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.830 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.088 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.088 "name": "raid_bdev1", 00:22:50.088 "uuid": "57d127db-9106-4c1b-a6ad-bef5d9eb38e7", 00:22:50.088 "strip_size_kb": 64, 00:22:50.088 "state": "configuring", 00:22:50.088 "raid_level": "raid0", 00:22:50.088 "superblock": true, 00:22:50.088 "num_base_bdevs": 4, 00:22:50.088 
"num_base_bdevs_discovered": 1, 00:22:50.088 "num_base_bdevs_operational": 4, 00:22:50.088 "base_bdevs_list": [ 00:22:50.088 { 00:22:50.088 "name": "pt1", 00:22:50.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:50.088 "is_configured": true, 00:22:50.088 "data_offset": 2048, 00:22:50.088 "data_size": 63488 00:22:50.088 }, 00:22:50.088 { 00:22:50.088 "name": null, 00:22:50.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:50.088 "is_configured": false, 00:22:50.088 "data_offset": 2048, 00:22:50.088 "data_size": 63488 00:22:50.088 }, 00:22:50.088 { 00:22:50.088 "name": null, 00:22:50.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:50.088 "is_configured": false, 00:22:50.088 "data_offset": 2048, 00:22:50.088 "data_size": 63488 00:22:50.088 }, 00:22:50.088 { 00:22:50.088 "name": null, 00:22:50.088 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:50.088 "is_configured": false, 00:22:50.088 "data_offset": 2048, 00:22:50.088 "data_size": 63488 00:22:50.088 } 00:22:50.088 ] 00:22:50.088 }' 00:22:50.088 16:39:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.088 16:39:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.653 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:22:50.653 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:50.653 [2024-07-24 16:39:47.477422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:50.653 [2024-07-24 16:39:47.477489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:50.653 [2024-07-24 16:39:47.477514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 00:22:50.653 [2024-07-24 16:39:47.477531] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:50.653 [2024-07-24 16:39:47.478087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:50.653 [2024-07-24 16:39:47.478115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:50.653 [2024-07-24 16:39:47.478216] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:50.653 [2024-07-24 16:39:47.478249] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:50.653 pt2 00:22:50.653 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:50.910 [2024-07-24 16:39:47.641889] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:50.910 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:22:50.910 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:50.910 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:50.910 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:50.910 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:50.910 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:50.910 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:50.910 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:50.910 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:50.910 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:50.910 
16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.910 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.168 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.168 "name": "raid_bdev1", 00:22:51.168 "uuid": "57d127db-9106-4c1b-a6ad-bef5d9eb38e7", 00:22:51.168 "strip_size_kb": 64, 00:22:51.168 "state": "configuring", 00:22:51.168 "raid_level": "raid0", 00:22:51.168 "superblock": true, 00:22:51.168 "num_base_bdevs": 4, 00:22:51.168 "num_base_bdevs_discovered": 1, 00:22:51.168 "num_base_bdevs_operational": 4, 00:22:51.168 "base_bdevs_list": [ 00:22:51.168 { 00:22:51.168 "name": "pt1", 00:22:51.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:51.168 "is_configured": true, 00:22:51.168 "data_offset": 2048, 00:22:51.168 "data_size": 63488 00:22:51.168 }, 00:22:51.168 { 00:22:51.168 "name": null, 00:22:51.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:51.168 "is_configured": false, 00:22:51.168 "data_offset": 2048, 00:22:51.168 "data_size": 63488 00:22:51.168 }, 00:22:51.168 { 00:22:51.168 "name": null, 00:22:51.168 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:51.168 "is_configured": false, 00:22:51.168 "data_offset": 2048, 00:22:51.168 "data_size": 63488 00:22:51.168 }, 00:22:51.168 { 00:22:51.168 "name": null, 00:22:51.168 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:51.168 "is_configured": false, 00:22:51.168 "data_offset": 2048, 00:22:51.168 "data_size": 63488 00:22:51.168 } 00:22:51.168 ] 00:22:51.168 }' 00:22:51.168 16:39:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.168 16:39:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:51.734 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 
-- # (( i = 1 )) 00:22:51.734 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:51.734 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:51.734 [2024-07-24 16:39:48.564343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:51.734 [2024-07-24 16:39:48.564405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.734 [2024-07-24 16:39:48.564431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:22:51.734 [2024-07-24 16:39:48.564446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.734 [2024-07-24 16:39:48.565013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.734 [2024-07-24 16:39:48.565036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:51.734 [2024-07-24 16:39:48.565129] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:51.734 [2024-07-24 16:39:48.565165] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:51.734 pt2 00:22:51.734 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:51.734 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:51.734 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:51.992 [2024-07-24 16:39:48.724825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:51.992 [2024-07-24 16:39:48.724872] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.992 [2024-07-24 16:39:48.724901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043b80 00:22:51.992 [2024-07-24 16:39:48.724917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.992 [2024-07-24 16:39:48.725443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.992 [2024-07-24 16:39:48.725469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:51.992 [2024-07-24 16:39:48.725550] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:51.992 [2024-07-24 16:39:48.725574] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:51.992 pt3 00:22:51.992 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:51.992 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:51.992 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:52.250 [2024-07-24 16:39:48.897263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:52.250 [2024-07-24 16:39:48.897315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:52.250 [2024-07-24 16:39:48.897339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:22:52.250 [2024-07-24 16:39:48.897354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:52.250 [2024-07-24 16:39:48.897831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:52.250 [2024-07-24 16:39:48.897856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:52.250 
[2024-07-24 16:39:48.897941] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:52.250 [2024-07-24 16:39:48.897964] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:52.250 [2024-07-24 16:39:48.898159] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:22:52.250 [2024-07-24 16:39:48.898174] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:52.250 [2024-07-24 16:39:48.898473] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:22:52.250 [2024-07-24 16:39:48.898666] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:22:52.250 [2024-07-24 16:39:48.898685] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:22:52.250 [2024-07-24 16:39:48.898841] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.250 pt4 00:22:52.250 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:52.250 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:52.250 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:52.250 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:52.250 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:52.250 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:52.250 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:52.250 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:52.250 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:22:52.250 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.250 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.250 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.250 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.250 16:39:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.250 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.250 "name": "raid_bdev1", 00:22:52.250 "uuid": "57d127db-9106-4c1b-a6ad-bef5d9eb38e7", 00:22:52.250 "strip_size_kb": 64, 00:22:52.250 "state": "online", 00:22:52.250 "raid_level": "raid0", 00:22:52.250 "superblock": true, 00:22:52.250 "num_base_bdevs": 4, 00:22:52.250 "num_base_bdevs_discovered": 4, 00:22:52.251 "num_base_bdevs_operational": 4, 00:22:52.251 "base_bdevs_list": [ 00:22:52.251 { 00:22:52.251 "name": "pt1", 00:22:52.251 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:52.251 "is_configured": true, 00:22:52.251 "data_offset": 2048, 00:22:52.251 "data_size": 63488 00:22:52.251 }, 00:22:52.251 { 00:22:52.251 "name": "pt2", 00:22:52.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:52.251 "is_configured": true, 00:22:52.251 "data_offset": 2048, 00:22:52.251 "data_size": 63488 00:22:52.251 }, 00:22:52.251 { 00:22:52.251 "name": "pt3", 00:22:52.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:52.251 "is_configured": true, 00:22:52.251 "data_offset": 2048, 00:22:52.251 "data_size": 63488 00:22:52.251 }, 00:22:52.251 { 00:22:52.251 "name": "pt4", 00:22:52.251 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:52.251 "is_configured": true, 00:22:52.251 "data_offset": 2048, 
00:22:52.251 "data_size": 63488 00:22:52.251 } 00:22:52.251 ] 00:22:52.251 }' 00:22:52.251 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.251 16:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.814 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:22:52.814 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:52.814 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:52.814 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:52.814 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:52.815 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:52.815 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:52.815 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:53.112 [2024-07-24 16:39:49.864286] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:53.112 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:53.112 "name": "raid_bdev1", 00:22:53.112 "aliases": [ 00:22:53.112 "57d127db-9106-4c1b-a6ad-bef5d9eb38e7" 00:22:53.112 ], 00:22:53.112 "product_name": "Raid Volume", 00:22:53.112 "block_size": 512, 00:22:53.112 "num_blocks": 253952, 00:22:53.112 "uuid": "57d127db-9106-4c1b-a6ad-bef5d9eb38e7", 00:22:53.112 "assigned_rate_limits": { 00:22:53.112 "rw_ios_per_sec": 0, 00:22:53.112 "rw_mbytes_per_sec": 0, 00:22:53.112 "r_mbytes_per_sec": 0, 00:22:53.112 "w_mbytes_per_sec": 0 00:22:53.112 }, 00:22:53.112 "claimed": false, 00:22:53.112 "zoned": false, 
00:22:53.112 "supported_io_types": { 00:22:53.112 "read": true, 00:22:53.112 "write": true, 00:22:53.112 "unmap": true, 00:22:53.112 "flush": true, 00:22:53.112 "reset": true, 00:22:53.112 "nvme_admin": false, 00:22:53.112 "nvme_io": false, 00:22:53.112 "nvme_io_md": false, 00:22:53.112 "write_zeroes": true, 00:22:53.112 "zcopy": false, 00:22:53.112 "get_zone_info": false, 00:22:53.112 "zone_management": false, 00:22:53.112 "zone_append": false, 00:22:53.112 "compare": false, 00:22:53.112 "compare_and_write": false, 00:22:53.112 "abort": false, 00:22:53.112 "seek_hole": false, 00:22:53.112 "seek_data": false, 00:22:53.112 "copy": false, 00:22:53.112 "nvme_iov_md": false 00:22:53.112 }, 00:22:53.112 "memory_domains": [ 00:22:53.112 { 00:22:53.112 "dma_device_id": "system", 00:22:53.112 "dma_device_type": 1 00:22:53.112 }, 00:22:53.112 { 00:22:53.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.112 "dma_device_type": 2 00:22:53.112 }, 00:22:53.112 { 00:22:53.112 "dma_device_id": "system", 00:22:53.112 "dma_device_type": 1 00:22:53.112 }, 00:22:53.112 { 00:22:53.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.112 "dma_device_type": 2 00:22:53.112 }, 00:22:53.112 { 00:22:53.112 "dma_device_id": "system", 00:22:53.112 "dma_device_type": 1 00:22:53.112 }, 00:22:53.112 { 00:22:53.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.112 "dma_device_type": 2 00:22:53.112 }, 00:22:53.112 { 00:22:53.112 "dma_device_id": "system", 00:22:53.112 "dma_device_type": 1 00:22:53.112 }, 00:22:53.112 { 00:22:53.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.112 "dma_device_type": 2 00:22:53.112 } 00:22:53.112 ], 00:22:53.112 "driver_specific": { 00:22:53.112 "raid": { 00:22:53.112 "uuid": "57d127db-9106-4c1b-a6ad-bef5d9eb38e7", 00:22:53.112 "strip_size_kb": 64, 00:22:53.112 "state": "online", 00:22:53.112 "raid_level": "raid0", 00:22:53.112 "superblock": true, 00:22:53.112 "num_base_bdevs": 4, 00:22:53.112 "num_base_bdevs_discovered": 4, 00:22:53.112 
"num_base_bdevs_operational": 4, 00:22:53.112 "base_bdevs_list": [ 00:22:53.112 { 00:22:53.112 "name": "pt1", 00:22:53.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:53.112 "is_configured": true, 00:22:53.112 "data_offset": 2048, 00:22:53.112 "data_size": 63488 00:22:53.112 }, 00:22:53.112 { 00:22:53.112 "name": "pt2", 00:22:53.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:53.112 "is_configured": true, 00:22:53.112 "data_offset": 2048, 00:22:53.112 "data_size": 63488 00:22:53.112 }, 00:22:53.112 { 00:22:53.112 "name": "pt3", 00:22:53.112 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:53.112 "is_configured": true, 00:22:53.112 "data_offset": 2048, 00:22:53.112 "data_size": 63488 00:22:53.112 }, 00:22:53.112 { 00:22:53.112 "name": "pt4", 00:22:53.112 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:53.112 "is_configured": true, 00:22:53.112 "data_offset": 2048, 00:22:53.112 "data_size": 63488 00:22:53.112 } 00:22:53.112 ] 00:22:53.112 } 00:22:53.112 } 00:22:53.112 }' 00:22:53.112 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:53.112 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:53.112 pt2 00:22:53.112 pt3 00:22:53.112 pt4' 00:22:53.112 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:53.112 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:53.112 16:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:53.391 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:53.391 "name": "pt1", 00:22:53.391 "aliases": [ 00:22:53.391 "00000000-0000-0000-0000-000000000001" 00:22:53.391 ], 00:22:53.391 "product_name": 
"passthru", 00:22:53.391 "block_size": 512, 00:22:53.391 "num_blocks": 65536, 00:22:53.391 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:53.391 "assigned_rate_limits": { 00:22:53.391 "rw_ios_per_sec": 0, 00:22:53.391 "rw_mbytes_per_sec": 0, 00:22:53.391 "r_mbytes_per_sec": 0, 00:22:53.391 "w_mbytes_per_sec": 0 00:22:53.391 }, 00:22:53.391 "claimed": true, 00:22:53.391 "claim_type": "exclusive_write", 00:22:53.391 "zoned": false, 00:22:53.391 "supported_io_types": { 00:22:53.391 "read": true, 00:22:53.391 "write": true, 00:22:53.391 "unmap": true, 00:22:53.391 "flush": true, 00:22:53.391 "reset": true, 00:22:53.391 "nvme_admin": false, 00:22:53.391 "nvme_io": false, 00:22:53.391 "nvme_io_md": false, 00:22:53.391 "write_zeroes": true, 00:22:53.391 "zcopy": true, 00:22:53.391 "get_zone_info": false, 00:22:53.391 "zone_management": false, 00:22:53.391 "zone_append": false, 00:22:53.391 "compare": false, 00:22:53.391 "compare_and_write": false, 00:22:53.391 "abort": true, 00:22:53.391 "seek_hole": false, 00:22:53.391 "seek_data": false, 00:22:53.391 "copy": true, 00:22:53.391 "nvme_iov_md": false 00:22:53.391 }, 00:22:53.391 "memory_domains": [ 00:22:53.391 { 00:22:53.391 "dma_device_id": "system", 00:22:53.391 "dma_device_type": 1 00:22:53.391 }, 00:22:53.391 { 00:22:53.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.391 "dma_device_type": 2 00:22:53.391 } 00:22:53.391 ], 00:22:53.391 "driver_specific": { 00:22:53.391 "passthru": { 00:22:53.391 "name": "pt1", 00:22:53.391 "base_bdev_name": "malloc1" 00:22:53.391 } 00:22:53.391 } 00:22:53.391 }' 00:22:53.391 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.391 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.649 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:53.649 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:53.649 16:39:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:53.649 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:53.649 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.649 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.649 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:53.649 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.649 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.649 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:53.649 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:53.649 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:53.649 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:53.907 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:53.907 "name": "pt2", 00:22:53.907 "aliases": [ 00:22:53.907 "00000000-0000-0000-0000-000000000002" 00:22:53.907 ], 00:22:53.907 "product_name": "passthru", 00:22:53.907 "block_size": 512, 00:22:53.907 "num_blocks": 65536, 00:22:53.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:53.907 "assigned_rate_limits": { 00:22:53.907 "rw_ios_per_sec": 0, 00:22:53.907 "rw_mbytes_per_sec": 0, 00:22:53.907 "r_mbytes_per_sec": 0, 00:22:53.907 "w_mbytes_per_sec": 0 00:22:53.907 }, 00:22:53.907 "claimed": true, 00:22:53.907 "claim_type": "exclusive_write", 00:22:53.907 "zoned": false, 00:22:53.907 "supported_io_types": { 00:22:53.907 "read": true, 00:22:53.907 "write": true, 00:22:53.907 "unmap": true, 00:22:53.907 
"flush": true, 00:22:53.907 "reset": true, 00:22:53.907 "nvme_admin": false, 00:22:53.907 "nvme_io": false, 00:22:53.907 "nvme_io_md": false, 00:22:53.907 "write_zeroes": true, 00:22:53.907 "zcopy": true, 00:22:53.907 "get_zone_info": false, 00:22:53.907 "zone_management": false, 00:22:53.907 "zone_append": false, 00:22:53.907 "compare": false, 00:22:53.907 "compare_and_write": false, 00:22:53.907 "abort": true, 00:22:53.907 "seek_hole": false, 00:22:53.907 "seek_data": false, 00:22:53.907 "copy": true, 00:22:53.907 "nvme_iov_md": false 00:22:53.907 }, 00:22:53.907 "memory_domains": [ 00:22:53.907 { 00:22:53.907 "dma_device_id": "system", 00:22:53.907 "dma_device_type": 1 00:22:53.907 }, 00:22:53.907 { 00:22:53.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.907 "dma_device_type": 2 00:22:53.907 } 00:22:53.907 ], 00:22:53.907 "driver_specific": { 00:22:53.907 "passthru": { 00:22:53.907 "name": "pt2", 00:22:53.907 "base_bdev_name": "malloc2" 00:22:53.907 } 00:22:53.907 } 00:22:53.907 }' 00:22:53.907 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.907 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.165 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.165 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.165 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.165 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:54.165 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.165 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.165 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:54.165 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:22:54.165 16:39:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.424 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:54.424 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:54.424 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:54.424 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:54.424 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:54.424 "name": "pt3", 00:22:54.424 "aliases": [ 00:22:54.424 "00000000-0000-0000-0000-000000000003" 00:22:54.424 ], 00:22:54.424 "product_name": "passthru", 00:22:54.424 "block_size": 512, 00:22:54.424 "num_blocks": 65536, 00:22:54.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:54.424 "assigned_rate_limits": { 00:22:54.424 "rw_ios_per_sec": 0, 00:22:54.424 "rw_mbytes_per_sec": 0, 00:22:54.424 "r_mbytes_per_sec": 0, 00:22:54.424 "w_mbytes_per_sec": 0 00:22:54.424 }, 00:22:54.424 "claimed": true, 00:22:54.424 "claim_type": "exclusive_write", 00:22:54.424 "zoned": false, 00:22:54.424 "supported_io_types": { 00:22:54.424 "read": true, 00:22:54.424 "write": true, 00:22:54.424 "unmap": true, 00:22:54.424 "flush": true, 00:22:54.424 "reset": true, 00:22:54.424 "nvme_admin": false, 00:22:54.424 "nvme_io": false, 00:22:54.424 "nvme_io_md": false, 00:22:54.424 "write_zeroes": true, 00:22:54.424 "zcopy": true, 00:22:54.424 "get_zone_info": false, 00:22:54.424 "zone_management": false, 00:22:54.424 "zone_append": false, 00:22:54.424 "compare": false, 00:22:54.424 "compare_and_write": false, 00:22:54.424 "abort": true, 00:22:54.424 "seek_hole": false, 00:22:54.424 "seek_data": false, 00:22:54.424 "copy": true, 00:22:54.424 "nvme_iov_md": false 00:22:54.424 }, 00:22:54.424 
"memory_domains": [ 00:22:54.424 { 00:22:54.424 "dma_device_id": "system", 00:22:54.424 "dma_device_type": 1 00:22:54.424 }, 00:22:54.424 { 00:22:54.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.424 "dma_device_type": 2 00:22:54.424 } 00:22:54.424 ], 00:22:54.424 "driver_specific": { 00:22:54.424 "passthru": { 00:22:54.424 "name": "pt3", 00:22:54.424 "base_bdev_name": "malloc3" 00:22:54.424 } 00:22:54.424 } 00:22:54.424 }' 00:22:54.424 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.683 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.683 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.683 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.683 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.683 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:54.683 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.683 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.683 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:54.683 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.683 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.941 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:54.941 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:54.941 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:54.941 16:39:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:55.199 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:55.199 "name": "pt4", 00:22:55.199 "aliases": [ 00:22:55.199 "00000000-0000-0000-0000-000000000004" 00:22:55.199 ], 00:22:55.199 "product_name": "passthru", 00:22:55.199 "block_size": 512, 00:22:55.199 "num_blocks": 65536, 00:22:55.199 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:55.199 "assigned_rate_limits": { 00:22:55.199 "rw_ios_per_sec": 0, 00:22:55.199 "rw_mbytes_per_sec": 0, 00:22:55.199 "r_mbytes_per_sec": 0, 00:22:55.199 "w_mbytes_per_sec": 0 00:22:55.199 }, 00:22:55.199 "claimed": true, 00:22:55.199 "claim_type": "exclusive_write", 00:22:55.199 "zoned": false, 00:22:55.199 "supported_io_types": { 00:22:55.199 "read": true, 00:22:55.199 "write": true, 00:22:55.199 "unmap": true, 00:22:55.199 "flush": true, 00:22:55.199 "reset": true, 00:22:55.199 "nvme_admin": false, 00:22:55.200 "nvme_io": false, 00:22:55.200 "nvme_io_md": false, 00:22:55.200 "write_zeroes": true, 00:22:55.200 "zcopy": true, 00:22:55.200 "get_zone_info": false, 00:22:55.200 "zone_management": false, 00:22:55.200 "zone_append": false, 00:22:55.200 "compare": false, 00:22:55.200 "compare_and_write": false, 00:22:55.200 "abort": true, 00:22:55.200 "seek_hole": false, 00:22:55.200 "seek_data": false, 00:22:55.200 "copy": true, 00:22:55.200 "nvme_iov_md": false 00:22:55.200 }, 00:22:55.200 "memory_domains": [ 00:22:55.200 { 00:22:55.200 "dma_device_id": "system", 00:22:55.200 "dma_device_type": 1 00:22:55.200 }, 00:22:55.200 { 00:22:55.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.200 "dma_device_type": 2 00:22:55.200 } 00:22:55.200 ], 00:22:55.200 "driver_specific": { 00:22:55.200 "passthru": { 00:22:55.200 "name": "pt4", 00:22:55.200 "base_bdev_name": "malloc4" 00:22:55.200 } 00:22:55.200 } 00:22:55.200 }' 00:22:55.200 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:55.200 16:39:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:55.200 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:55.200 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.200 16:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.200 16:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:55.200 16:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.200 16:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.458 16:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:55.458 16:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.458 16:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.458 16:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:55.458 16:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:55.458 16:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:22:55.718 [2024-07-24 16:39:52.387096] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:55.718 16:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 57d127db-9106-4c1b-a6ad-bef5d9eb38e7 '!=' 57d127db-9106-4c1b-a6ad-bef5d9eb38e7 ']' 00:22:55.718 16:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:22:55.718 16:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:55.718 16:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:55.718 16:39:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 1691863 00:22:55.718 16:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 1691863 ']' 00:22:55.718 16:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 1691863 00:22:55.718 16:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:22:55.718 16:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:55.718 16:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1691863 00:22:55.718 16:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:55.718 16:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:55.718 16:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1691863' 00:22:55.718 killing process with pid 1691863 00:22:55.718 16:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 1691863 00:22:55.718 [2024-07-24 16:39:52.467453] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:55.718 [2024-07-24 16:39:52.467544] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:55.718 16:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 1691863 00:22:55.718 [2024-07-24 16:39:52.467628] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:55.718 [2024-07-24 16:39:52.467645] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:22:56.287 [2024-07-24 16:39:52.915656] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:58.193 16:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:22:58.193 00:22:58.193 real 
0m18.670s 00:22:58.193 user 0m31.729s 00:22:58.193 sys 0m3.063s 00:22:58.193 16:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:58.193 16:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.193 ************************************ 00:22:58.193 END TEST raid_superblock_test 00:22:58.193 ************************************ 00:22:58.193 16:39:54 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:22:58.193 16:39:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:58.193 16:39:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:58.193 16:39:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:58.193 ************************************ 00:22:58.193 START TEST raid_read_error_test 00:22:58.193 ************************************ 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:22:58.193 16:39:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:22:58.193 16:39:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.mmEq9im5ro 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1695352 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1695352 /var/tmp/spdk-raid.sock 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 1695352 ']' 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:58.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.193 16:39:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.193 [2024-07-24 16:39:54.788218] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:22:58.193 [2024-07-24 16:39:54.788340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1695352 ] 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:02.3 cannot be used 
00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:58.193 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.193 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:58.193 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.194 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:58.194 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.194 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:58.194 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.194 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:58.194 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.194 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:58.194 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.194 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:58.194 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.194 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:58.194 [2024-07-24 16:39:55.013707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.453 [2024-07-24 16:39:55.296663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.021 [2024-07-24 16:39:55.605121] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:59.021 [2024-07-24 16:39:55.605165] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:59.021 16:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.021 16:39:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:22:59.021 16:39:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:59.021 16:39:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:59.280 BaseBdev1_malloc 00:22:59.280 16:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:59.539 true 00:22:59.539 16:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:59.798 [2024-07-24 16:39:56.480448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:59.798 [2024-07-24 16:39:56.480511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.798 [2024-07-24 16:39:56.480538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:22:59.798 [2024-07-24 16:39:56.480561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.798 [2024-07-24 16:39:56.483357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.798 [2024-07-24 16:39:56.483397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:59.798 BaseBdev1 00:22:59.798 16:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:59.798 16:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:00.057 BaseBdev2_malloc 00:23:00.057 16:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:00.317 true 00:23:00.317 16:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:00.575 [2024-07-24 16:39:57.213834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:23:00.575 [2024-07-24 16:39:57.213896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.575 [2024-07-24 16:39:57.213922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:23:00.575 [2024-07-24 16:39:57.213943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.575 [2024-07-24 16:39:57.216718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.575 [2024-07-24 16:39:57.216756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:00.575 BaseBdev2 00:23:00.575 16:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:00.575 16:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:00.834 BaseBdev3_malloc 00:23:00.834 16:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:01.093 true 00:23:01.093 16:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:01.093 [2024-07-24 16:39:57.954385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:01.093 [2024-07-24 16:39:57.954446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.093 [2024-07-24 16:39:57.954475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:23:01.093 [2024-07-24 16:39:57.954493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.352 [2024-07-24 
16:39:57.957282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.352 [2024-07-24 16:39:57.957320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:01.352 BaseBdev3 00:23:01.352 16:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:01.352 16:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:01.611 BaseBdev4_malloc 00:23:01.611 16:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:01.611 true 00:23:01.611 16:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:01.869 [2024-07-24 16:39:58.661554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:01.869 [2024-07-24 16:39:58.661618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.869 [2024-07-24 16:39:58.661645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:23:01.869 [2024-07-24 16:39:58.661663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.869 [2024-07-24 16:39:58.664468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.869 [2024-07-24 16:39:58.664508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:01.869 BaseBdev4 00:23:01.869 16:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:02.128 [2024-07-24 16:39:58.886221] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:02.128 [2024-07-24 16:39:58.888576] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:02.128 [2024-07-24 16:39:58.888670] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:02.128 [2024-07-24 16:39:58.888751] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:02.128 [2024-07-24 16:39:58.889048] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:23:02.128 [2024-07-24 16:39:58.889070] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:02.128 [2024-07-24 16:39:58.889419] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:23:02.128 [2024-07-24 16:39:58.889681] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:23:02.128 [2024-07-24 16:39:58.889698] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:23:02.129 [2024-07-24 16:39:58.889904] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:02.129 16:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:02.129 16:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:02.129 16:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:02.129 16:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:02.129 16:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:02.129 16:39:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:02.129 16:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:02.129 16:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:02.129 16:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:02.129 16:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:02.129 16:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.129 16:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.388 16:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.388 "name": "raid_bdev1", 00:23:02.388 "uuid": "e0952a47-008e-46c6-a42d-c72e97ecae6e", 00:23:02.388 "strip_size_kb": 64, 00:23:02.388 "state": "online", 00:23:02.388 "raid_level": "raid0", 00:23:02.388 "superblock": true, 00:23:02.388 "num_base_bdevs": 4, 00:23:02.388 "num_base_bdevs_discovered": 4, 00:23:02.388 "num_base_bdevs_operational": 4, 00:23:02.388 "base_bdevs_list": [ 00:23:02.388 { 00:23:02.388 "name": "BaseBdev1", 00:23:02.388 "uuid": "0a699f87-0361-5460-974f-a6c119ba78a8", 00:23:02.388 "is_configured": true, 00:23:02.388 "data_offset": 2048, 00:23:02.388 "data_size": 63488 00:23:02.388 }, 00:23:02.388 { 00:23:02.388 "name": "BaseBdev2", 00:23:02.388 "uuid": "2b76fe41-deab-5bd8-8100-50afcac2415a", 00:23:02.388 "is_configured": true, 00:23:02.388 "data_offset": 2048, 00:23:02.388 "data_size": 63488 00:23:02.388 }, 00:23:02.388 { 00:23:02.388 "name": "BaseBdev3", 00:23:02.388 "uuid": "30b6cbff-27f7-5cf5-801f-a838abfcf626", 00:23:02.388 "is_configured": true, 00:23:02.388 "data_offset": 2048, 00:23:02.388 "data_size": 63488 
00:23:02.388 }, 00:23:02.388 { 00:23:02.388 "name": "BaseBdev4", 00:23:02.388 "uuid": "1f88fe3e-9e87-5ecc-ba7d-62d9a04e914a", 00:23:02.388 "is_configured": true, 00:23:02.388 "data_offset": 2048, 00:23:02.388 "data_size": 63488 00:23:02.388 } 00:23:02.388 ] 00:23:02.388 }' 00:23:02.388 16:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.388 16:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.956 16:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:23:02.956 16:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:02.956 [2024-07-24 16:39:59.774353] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:23:03.893 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.153 16:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.412 16:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:04.412 "name": "raid_bdev1", 00:23:04.412 "uuid": "e0952a47-008e-46c6-a42d-c72e97ecae6e", 00:23:04.412 "strip_size_kb": 64, 00:23:04.412 "state": "online", 00:23:04.412 "raid_level": "raid0", 00:23:04.412 "superblock": true, 00:23:04.412 "num_base_bdevs": 4, 00:23:04.412 "num_base_bdevs_discovered": 4, 00:23:04.412 "num_base_bdevs_operational": 4, 00:23:04.412 "base_bdevs_list": [ 00:23:04.412 { 00:23:04.412 "name": "BaseBdev1", 00:23:04.412 "uuid": "0a699f87-0361-5460-974f-a6c119ba78a8", 00:23:04.412 "is_configured": true, 00:23:04.412 "data_offset": 2048, 00:23:04.412 "data_size": 63488 00:23:04.412 }, 00:23:04.412 { 00:23:04.412 "name": "BaseBdev2", 00:23:04.412 "uuid": "2b76fe41-deab-5bd8-8100-50afcac2415a", 00:23:04.412 "is_configured": true, 00:23:04.412 "data_offset": 2048, 00:23:04.412 "data_size": 63488 00:23:04.412 }, 00:23:04.412 { 00:23:04.412 "name": "BaseBdev3", 00:23:04.412 "uuid": "30b6cbff-27f7-5cf5-801f-a838abfcf626", 00:23:04.412 "is_configured": true, 00:23:04.412 
"data_offset": 2048, 00:23:04.412 "data_size": 63488 00:23:04.412 }, 00:23:04.412 { 00:23:04.412 "name": "BaseBdev4", 00:23:04.412 "uuid": "1f88fe3e-9e87-5ecc-ba7d-62d9a04e914a", 00:23:04.412 "is_configured": true, 00:23:04.412 "data_offset": 2048, 00:23:04.412 "data_size": 63488 00:23:04.412 } 00:23:04.412 ] 00:23:04.412 }' 00:23:04.412 16:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:04.412 16:40:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:05.351 16:40:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:05.351 [2024-07-24 16:40:02.187015] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:05.351 [2024-07-24 16:40:02.187060] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:05.351 [2024-07-24 16:40:02.190342] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:05.351 [2024-07-24 16:40:02.190403] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:05.351 [2024-07-24 16:40:02.190457] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:05.351 [2024-07-24 16:40:02.190482] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:23:05.351 0 00:23:05.351 16:40:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1695352 00:23:05.351 16:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 1695352 ']' 00:23:05.351 16:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 1695352 00:23:05.351 16:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:23:05.611 16:40:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:05.611 16:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1695352 00:23:05.611 16:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:05.611 16:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:05.611 16:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1695352' 00:23:05.611 killing process with pid 1695352 00:23:05.611 16:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 1695352 00:23:05.611 [2024-07-24 16:40:02.264528] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:05.611 16:40:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 1695352 00:23:05.871 [2024-07-24 16:40:02.623855] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:07.777 16:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.mmEq9im5ro 00:23:07.777 16:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:23:07.777 16:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:23:07.777 16:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.42 00:23:07.777 16:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:23:07.777 16:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:07.777 16:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:07.777 16:40:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.42 != \0\.\0\0 ]] 00:23:07.777 00:23:07.777 real 0m9.751s 00:23:07.777 user 0m14.046s 00:23:07.777 sys 0m1.483s 00:23:07.777 16:40:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:23:07.777 16:40:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.777 ************************************ 00:23:07.777 END TEST raid_read_error_test 00:23:07.777 ************************************ 00:23:07.777 16:40:04 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:23:07.777 16:40:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:07.777 16:40:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:07.777 16:40:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:07.777 ************************************ 00:23:07.777 START TEST raid_write_error_test 00:23:07.777 ************************************ 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:07.777 16:40:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:23:07.777 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:23:07.778 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:23:07.778 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:23:07.778 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:23:07.778 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.1ElBKMMCSj 
00:23:07.778 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1697036 00:23:07.778 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1697036 /var/tmp/spdk-raid.sock 00:23:07.778 16:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:07.778 16:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 1697036 ']' 00:23:07.778 16:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:07.778 16:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:07.778 16:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:07.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:07.778 16:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:07.778 16:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.778 [2024-07-24 16:40:04.609323] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:23:07.778 [2024-07-24 16:40:04.609417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697036 ] 00:23:08.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.037 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:08.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.037 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:08.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.037 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:08.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.037 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:08.037 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3d:02.3 cannot be used 
00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:08.038 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:08.038 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:08.038 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:08.038 [2024-07-24 16:40:04.809220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.298 [2024-07-24 16:40:05.093810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.908 [2024-07-24 16:40:05.436177] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:08.908 [2024-07-24 16:40:05.436220] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:08.908 16:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:08.908 16:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:23:08.908 16:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:08.908 16:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:09.166 BaseBdev1_malloc 00:23:09.166 16:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:09.425 true 00:23:09.425 16:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:09.685 [2024-07-24 16:40:06.333276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:09.685 [2024-07-24 16:40:06.333337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.685 [2024-07-24 16:40:06.333366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:23:09.685 [2024-07-24 16:40:06.333388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.685 [2024-07-24 16:40:06.336131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.685 [2024-07-24 16:40:06.336179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:09.685 BaseBdev1 00:23:09.685 16:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:09.685 16:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:09.943 BaseBdev2_malloc 00:23:09.943 16:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:10.202 true 00:23:10.202 16:40:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:10.202 [2024-07-24 16:40:07.057188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:23:10.202 [2024-07-24 16:40:07.057249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.202 [2024-07-24 16:40:07.057275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:23:10.202 [2024-07-24 16:40:07.057295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.202 [2024-07-24 16:40:07.060036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.202 [2024-07-24 16:40:07.060075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:10.202 BaseBdev2 00:23:10.460 16:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:10.461 16:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:10.800 BaseBdev3_malloc 00:23:10.800 16:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:10.800 true 00:23:10.800 16:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:11.059 [2024-07-24 16:40:07.765347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:11.059 [2024-07-24 16:40:07.765403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.059 [2024-07-24 16:40:07.765427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:23:11.059 [2024-07-24 16:40:07.765445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.059 
[2024-07-24 16:40:07.768160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:11.059 [2024-07-24 16:40:07.768196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:11.059 BaseBdev3 00:23:11.059 16:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:11.059 16:40:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:11.318 BaseBdev4_malloc 00:23:11.318 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:11.576 true 00:23:11.576 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:11.835 [2024-07-24 16:40:08.481191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:11.835 [2024-07-24 16:40:08.481256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:11.835 [2024-07-24 16:40:08.481284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:23:11.835 [2024-07-24 16:40:08.481303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:11.835 [2024-07-24 16:40:08.484077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:11.835 [2024-07-24 16:40:08.484115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:11.835 BaseBdev4 00:23:11.835 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:12.094 [2024-07-24 16:40:08.709842] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:12.094 [2024-07-24 16:40:08.712204] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:12.094 [2024-07-24 16:40:08.712299] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:12.094 [2024-07-24 16:40:08.712381] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:12.094 [2024-07-24 16:40:08.712663] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:23:12.094 [2024-07-24 16:40:08.712684] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:12.094 [2024-07-24 16:40:08.713040] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:23:12.094 [2024-07-24 16:40:08.713329] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:23:12.094 [2024-07-24 16:40:08.713348] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:23:12.094 [2024-07-24 16:40:08.713568] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:12.094 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:12.094 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:12.094 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:12.094 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:12.094 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:12.094 16:40:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:12.094 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.094 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.094 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.094 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.094 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.094 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.354 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.354 "name": "raid_bdev1", 00:23:12.354 "uuid": "d2ca6045-a32e-49e0-9c51-1efa878d2c8a", 00:23:12.354 "strip_size_kb": 64, 00:23:12.354 "state": "online", 00:23:12.354 "raid_level": "raid0", 00:23:12.354 "superblock": true, 00:23:12.354 "num_base_bdevs": 4, 00:23:12.354 "num_base_bdevs_discovered": 4, 00:23:12.354 "num_base_bdevs_operational": 4, 00:23:12.354 "base_bdevs_list": [ 00:23:12.354 { 00:23:12.354 "name": "BaseBdev1", 00:23:12.354 "uuid": "72b5a909-bf04-5655-9ee1-89eb5ac5d817", 00:23:12.354 "is_configured": true, 00:23:12.354 "data_offset": 2048, 00:23:12.354 "data_size": 63488 00:23:12.354 }, 00:23:12.354 { 00:23:12.354 "name": "BaseBdev2", 00:23:12.354 "uuid": "81c23943-5690-5339-805a-ea9d2d340219", 00:23:12.354 "is_configured": true, 00:23:12.354 "data_offset": 2048, 00:23:12.354 "data_size": 63488 00:23:12.354 }, 00:23:12.354 { 00:23:12.354 "name": "BaseBdev3", 00:23:12.354 "uuid": "c27a0796-1b47-5e23-845e-579155c3eac6", 00:23:12.354 "is_configured": true, 00:23:12.354 "data_offset": 2048, 00:23:12.354 "data_size": 
63488 00:23:12.354 }, 00:23:12.354 { 00:23:12.354 "name": "BaseBdev4", 00:23:12.354 "uuid": "4516b240-9fcb-58e2-b8e9-c42801015ae0", 00:23:12.354 "is_configured": true, 00:23:12.354 "data_offset": 2048, 00:23:12.354 "data_size": 63488 00:23:12.354 } 00:23:12.354 ] 00:23:12.354 }' 00:23:12.354 16:40:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.354 16:40:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.922 16:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:23:12.922 16:40:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:12.922 [2024-07-24 16:40:09.622046] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:23:13.859 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.118 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.376 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:14.376 "name": "raid_bdev1", 00:23:14.376 "uuid": "d2ca6045-a32e-49e0-9c51-1efa878d2c8a", 00:23:14.377 "strip_size_kb": 64, 00:23:14.377 "state": "online", 00:23:14.377 "raid_level": "raid0", 00:23:14.377 "superblock": true, 00:23:14.377 "num_base_bdevs": 4, 00:23:14.377 "num_base_bdevs_discovered": 4, 00:23:14.377 "num_base_bdevs_operational": 4, 00:23:14.377 "base_bdevs_list": [ 00:23:14.377 { 00:23:14.377 "name": "BaseBdev1", 00:23:14.377 "uuid": "72b5a909-bf04-5655-9ee1-89eb5ac5d817", 00:23:14.377 "is_configured": true, 00:23:14.377 "data_offset": 2048, 00:23:14.377 "data_size": 63488 00:23:14.377 }, 00:23:14.377 { 00:23:14.377 "name": "BaseBdev2", 00:23:14.377 "uuid": "81c23943-5690-5339-805a-ea9d2d340219", 00:23:14.377 "is_configured": true, 00:23:14.377 "data_offset": 2048, 00:23:14.377 "data_size": 63488 00:23:14.377 }, 00:23:14.377 { 00:23:14.377 "name": "BaseBdev3", 00:23:14.377 "uuid": "c27a0796-1b47-5e23-845e-579155c3eac6", 00:23:14.377 "is_configured": 
true, 00:23:14.377 "data_offset": 2048, 00:23:14.377 "data_size": 63488 00:23:14.377 }, 00:23:14.377 { 00:23:14.377 "name": "BaseBdev4", 00:23:14.377 "uuid": "4516b240-9fcb-58e2-b8e9-c42801015ae0", 00:23:14.377 "is_configured": true, 00:23:14.377 "data_offset": 2048, 00:23:14.377 "data_size": 63488 00:23:14.377 } 00:23:14.377 ] 00:23:14.377 }' 00:23:14.377 16:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:14.377 16:40:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.943 16:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:14.943 [2024-07-24 16:40:11.782050] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:14.943 [2024-07-24 16:40:11.782096] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:14.943 [2024-07-24 16:40:11.785383] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:14.943 [2024-07-24 16:40:11.785443] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.943 [2024-07-24 16:40:11.785498] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:14.943 [2024-07-24 16:40:11.785529] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:23:14.943 0 00:23:14.943 16:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1697036 00:23:14.943 16:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 1697036 ']' 00:23:14.943 16:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 1697036 00:23:14.943 16:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:23:15.201 16:40:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:15.201 16:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1697036 00:23:15.201 16:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:15.201 16:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:15.201 16:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1697036' 00:23:15.201 killing process with pid 1697036 00:23:15.201 16:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 1697036 00:23:15.201 [2024-07-24 16:40:11.858659] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:15.201 16:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 1697036 00:23:15.459 [2024-07-24 16:40:12.215535] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:17.363 16:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.1ElBKMMCSj 00:23:17.363 16:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:23:17.363 16:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:23:17.363 16:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:23:17.363 16:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:23:17.363 16:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:17.363 16:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:17.363 16:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:23:17.363 00:23:17.363 real 0m9.534s 00:23:17.363 user 0m13.618s 00:23:17.363 sys 0m1.440s 00:23:17.364 16:40:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:17.364 16:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.364 ************************************ 00:23:17.364 END TEST raid_write_error_test 00:23:17.364 ************************************ 00:23:17.364 16:40:14 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:23:17.364 16:40:14 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:23:17.364 16:40:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:17.364 16:40:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:17.364 16:40:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:17.364 ************************************ 00:23:17.364 START TEST raid_state_function_test 00:23:17.364 ************************************ 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # 
strip_size_create_arg='-z 64' 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1698720 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1698720' 00:23:17.364 Process raid pid: 1698720 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1698720 /var/tmp/spdk-raid.sock 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 1698720 ']' 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:17.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:17.364 16:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:17.623 [2024-07-24 16:40:14.238135] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:23:17.623 [2024-07-24 16:40:14.238256] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:17.623 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:17.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.623 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:17.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.624 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:17.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.624 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:17.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.624 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:17.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.624 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:17.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.624 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:17.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.624 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:17.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.624 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:17.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.624 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:17.624 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.624 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:17.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.624 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:17.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.624 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:17.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.624 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:17.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.624 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:17.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:17.624 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:17.624 [2024-07-24 16:40:14.465329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.191 [2024-07-24 16:40:14.752428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.450 [2024-07-24 16:40:15.105669] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:18.450 [2024-07-24 16:40:15.105708] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:18.450 16:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.450 16:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:23:18.450 16:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:18.708 [2024-07-24 16:40:15.509162] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:18.709 [2024-07-24 16:40:15.509215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:23:18.709 [2024-07-24 16:40:15.509230] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:18.709 [2024-07-24 16:40:15.509247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:18.709 [2024-07-24 16:40:15.509259] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:18.709 [2024-07-24 16:40:15.509275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:18.709 [2024-07-24 16:40:15.509286] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:18.709 [2024-07-24 16:40:15.509302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:18.709 16:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:18.709 16:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:18.709 16:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:18.709 16:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:18.709 16:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:18.709 16:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:18.709 16:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:18.709 16:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:18.709 16:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:18.709 16:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:18.709 16:40:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.709 16:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:18.968 16:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:18.968 "name": "Existed_Raid", 00:23:18.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.968 "strip_size_kb": 64, 00:23:18.968 "state": "configuring", 00:23:18.968 "raid_level": "concat", 00:23:18.968 "superblock": false, 00:23:18.968 "num_base_bdevs": 4, 00:23:18.968 "num_base_bdevs_discovered": 0, 00:23:18.968 "num_base_bdevs_operational": 4, 00:23:18.968 "base_bdevs_list": [ 00:23:18.968 { 00:23:18.968 "name": "BaseBdev1", 00:23:18.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.968 "is_configured": false, 00:23:18.968 "data_offset": 0, 00:23:18.968 "data_size": 0 00:23:18.968 }, 00:23:18.968 { 00:23:18.968 "name": "BaseBdev2", 00:23:18.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.968 "is_configured": false, 00:23:18.968 "data_offset": 0, 00:23:18.968 "data_size": 0 00:23:18.968 }, 00:23:18.968 { 00:23:18.968 "name": "BaseBdev3", 00:23:18.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.968 "is_configured": false, 00:23:18.968 "data_offset": 0, 00:23:18.968 "data_size": 0 00:23:18.968 }, 00:23:18.968 { 00:23:18.968 "name": "BaseBdev4", 00:23:18.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.968 "is_configured": false, 00:23:18.968 "data_offset": 0, 00:23:18.968 "data_size": 0 00:23:18.968 } 00:23:18.968 ] 00:23:18.968 }' 00:23:18.968 16:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:18.968 16:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.535 16:40:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:19.794 [2024-07-24 16:40:16.515711] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:19.794 [2024-07-24 16:40:16.515750] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:23:19.794 16:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:20.053 [2024-07-24 16:40:16.740386] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:20.053 [2024-07-24 16:40:16.740434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:20.053 [2024-07-24 16:40:16.740447] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:20.053 [2024-07-24 16:40:16.740470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:20.053 [2024-07-24 16:40:16.740482] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:20.053 [2024-07-24 16:40:16.740498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:20.053 [2024-07-24 16:40:16.740509] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:20.053 [2024-07-24 16:40:16.740525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:20.053 16:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:20.312 [2024-07-24 16:40:17.013843] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:20.312 BaseBdev1 00:23:20.312 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:20.312 16:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:23:20.312 16:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:20.313 16:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:20.313 16:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:20.313 16:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:20.313 16:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:20.571 16:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:20.830 [ 00:23:20.830 { 00:23:20.830 "name": "BaseBdev1", 00:23:20.830 "aliases": [ 00:23:20.830 "4f7e34f3-e0f4-442a-a92b-03b715895dc5" 00:23:20.830 ], 00:23:20.830 "product_name": "Malloc disk", 00:23:20.830 "block_size": 512, 00:23:20.830 "num_blocks": 65536, 00:23:20.830 "uuid": "4f7e34f3-e0f4-442a-a92b-03b715895dc5", 00:23:20.830 "assigned_rate_limits": { 00:23:20.830 "rw_ios_per_sec": 0, 00:23:20.830 "rw_mbytes_per_sec": 0, 00:23:20.830 "r_mbytes_per_sec": 0, 00:23:20.830 "w_mbytes_per_sec": 0 00:23:20.830 }, 00:23:20.830 "claimed": true, 00:23:20.830 "claim_type": "exclusive_write", 00:23:20.830 "zoned": false, 00:23:20.830 "supported_io_types": { 00:23:20.830 "read": true, 00:23:20.830 "write": true, 00:23:20.830 "unmap": true, 00:23:20.830 "flush": true, 00:23:20.830 
"reset": true, 00:23:20.830 "nvme_admin": false, 00:23:20.830 "nvme_io": false, 00:23:20.830 "nvme_io_md": false, 00:23:20.830 "write_zeroes": true, 00:23:20.830 "zcopy": true, 00:23:20.830 "get_zone_info": false, 00:23:20.830 "zone_management": false, 00:23:20.830 "zone_append": false, 00:23:20.830 "compare": false, 00:23:20.830 "compare_and_write": false, 00:23:20.830 "abort": true, 00:23:20.830 "seek_hole": false, 00:23:20.830 "seek_data": false, 00:23:20.830 "copy": true, 00:23:20.830 "nvme_iov_md": false 00:23:20.830 }, 00:23:20.830 "memory_domains": [ 00:23:20.830 { 00:23:20.830 "dma_device_id": "system", 00:23:20.830 "dma_device_type": 1 00:23:20.830 }, 00:23:20.830 { 00:23:20.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.830 "dma_device_type": 2 00:23:20.830 } 00:23:20.830 ], 00:23:20.830 "driver_specific": {} 00:23:20.830 } 00:23:20.830 ] 00:23:20.830 16:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:20.830 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:20.830 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:20.830 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:20.830 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:20.830 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:20.830 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:20.830 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:20.830 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:20.830 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:23:20.830 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:20.830 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.830 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.089 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:21.089 "name": "Existed_Raid", 00:23:21.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.089 "strip_size_kb": 64, 00:23:21.089 "state": "configuring", 00:23:21.089 "raid_level": "concat", 00:23:21.089 "superblock": false, 00:23:21.089 "num_base_bdevs": 4, 00:23:21.089 "num_base_bdevs_discovered": 1, 00:23:21.089 "num_base_bdevs_operational": 4, 00:23:21.089 "base_bdevs_list": [ 00:23:21.089 { 00:23:21.089 "name": "BaseBdev1", 00:23:21.089 "uuid": "4f7e34f3-e0f4-442a-a92b-03b715895dc5", 00:23:21.089 "is_configured": true, 00:23:21.089 "data_offset": 0, 00:23:21.089 "data_size": 65536 00:23:21.089 }, 00:23:21.089 { 00:23:21.089 "name": "BaseBdev2", 00:23:21.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.089 "is_configured": false, 00:23:21.089 "data_offset": 0, 00:23:21.089 "data_size": 0 00:23:21.089 }, 00:23:21.089 { 00:23:21.089 "name": "BaseBdev3", 00:23:21.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.089 "is_configured": false, 00:23:21.089 "data_offset": 0, 00:23:21.089 "data_size": 0 00:23:21.089 }, 00:23:21.089 { 00:23:21.089 "name": "BaseBdev4", 00:23:21.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.089 "is_configured": false, 00:23:21.089 "data_offset": 0, 00:23:21.089 "data_size": 0 00:23:21.089 } 00:23:21.089 ] 00:23:21.089 }' 00:23:21.089 16:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:23:21.089 16:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.657 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:21.657 [2024-07-24 16:40:18.486021] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:21.657 [2024-07-24 16:40:18.486077] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:23:21.657 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:21.915 [2024-07-24 16:40:18.714727] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:21.915 [2024-07-24 16:40:18.717012] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:21.915 [2024-07-24 16:40:18.717054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:21.916 [2024-07-24 16:40:18.717068] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:21.916 [2024-07-24 16:40:18.717084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:21.916 [2024-07-24 16:40:18.717101] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:21.916 [2024-07-24 16:40:18.717120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:21.916 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:21.916 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:21.916 16:40:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:21.916 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:21.916 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:21.916 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:21.916 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:21.916 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:21.916 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:21.916 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:21.916 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:21.916 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:21.916 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.916 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.174 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.174 "name": "Existed_Raid", 00:23:22.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.174 "strip_size_kb": 64, 00:23:22.174 "state": "configuring", 00:23:22.174 "raid_level": "concat", 00:23:22.174 "superblock": false, 00:23:22.174 "num_base_bdevs": 4, 00:23:22.174 "num_base_bdevs_discovered": 1, 00:23:22.174 "num_base_bdevs_operational": 4, 00:23:22.174 "base_bdevs_list": [ 00:23:22.174 { 
00:23:22.174 "name": "BaseBdev1", 00:23:22.174 "uuid": "4f7e34f3-e0f4-442a-a92b-03b715895dc5", 00:23:22.174 "is_configured": true, 00:23:22.174 "data_offset": 0, 00:23:22.174 "data_size": 65536 00:23:22.174 }, 00:23:22.174 { 00:23:22.174 "name": "BaseBdev2", 00:23:22.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.174 "is_configured": false, 00:23:22.174 "data_offset": 0, 00:23:22.174 "data_size": 0 00:23:22.174 }, 00:23:22.174 { 00:23:22.174 "name": "BaseBdev3", 00:23:22.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.174 "is_configured": false, 00:23:22.174 "data_offset": 0, 00:23:22.174 "data_size": 0 00:23:22.174 }, 00:23:22.174 { 00:23:22.174 "name": "BaseBdev4", 00:23:22.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.174 "is_configured": false, 00:23:22.174 "data_offset": 0, 00:23:22.174 "data_size": 0 00:23:22.174 } 00:23:22.174 ] 00:23:22.174 }' 00:23:22.174 16:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.174 16:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.741 16:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:22.999 [2024-07-24 16:40:19.793847] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:22.999 BaseBdev2 00:23:22.999 16:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:22.999 16:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:22.999 16:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:22.999 16:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:22.999 16:40:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:23.000 16:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:23.000 16:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:23.259 16:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:23.517 [ 00:23:23.517 { 00:23:23.517 "name": "BaseBdev2", 00:23:23.517 "aliases": [ 00:23:23.517 "5f815b14-a02a-4abd-8de1-7598709acd8f" 00:23:23.517 ], 00:23:23.517 "product_name": "Malloc disk", 00:23:23.517 "block_size": 512, 00:23:23.517 "num_blocks": 65536, 00:23:23.517 "uuid": "5f815b14-a02a-4abd-8de1-7598709acd8f", 00:23:23.517 "assigned_rate_limits": { 00:23:23.517 "rw_ios_per_sec": 0, 00:23:23.518 "rw_mbytes_per_sec": 0, 00:23:23.518 "r_mbytes_per_sec": 0, 00:23:23.518 "w_mbytes_per_sec": 0 00:23:23.518 }, 00:23:23.518 "claimed": true, 00:23:23.518 "claim_type": "exclusive_write", 00:23:23.518 "zoned": false, 00:23:23.518 "supported_io_types": { 00:23:23.518 "read": true, 00:23:23.518 "write": true, 00:23:23.518 "unmap": true, 00:23:23.518 "flush": true, 00:23:23.518 "reset": true, 00:23:23.518 "nvme_admin": false, 00:23:23.518 "nvme_io": false, 00:23:23.518 "nvme_io_md": false, 00:23:23.518 "write_zeroes": true, 00:23:23.518 "zcopy": true, 00:23:23.518 "get_zone_info": false, 00:23:23.518 "zone_management": false, 00:23:23.518 "zone_append": false, 00:23:23.518 "compare": false, 00:23:23.518 "compare_and_write": false, 00:23:23.518 "abort": true, 00:23:23.518 "seek_hole": false, 00:23:23.518 "seek_data": false, 00:23:23.518 "copy": true, 00:23:23.518 "nvme_iov_md": false 00:23:23.518 }, 00:23:23.518 "memory_domains": [ 00:23:23.518 { 00:23:23.518 "dma_device_id": "system", 
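The `waitforbdev BaseBdev2` trace above polls `bdev_get_bdevs -b <name>` until the bdev shows up or a timeout (2000 ms in the log) expires. A minimal standalone sketch of that retry loop, with the `rpc.py` call replaced by a stub so it runs anywhere (`fake_rpc` and its succeed-on-third-call behavior are illustrative assumptions, not SPDK code):

```shell
# Hypothetical sketch of the waitforbdev retry pattern from the log.
# fake_rpc stands in for "rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b <name>";
# it is rigged to succeed on the third call, simulating a bdev that
# appears only after bdev_wait_for_examine completes.
attempts=0
fake_rpc() { attempts=$((attempts + 1)); [ "$attempts" -ge 3 ]; }

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}   # ms, matching bdev_timeout=2000 in the log
    local i
    for ((i = 0; i < bdev_timeout; i += 100)); do
        if fake_rpc "$bdev_name"; then
            return 0                 # bdev visible: stop polling
        fi
        sleep 0.1                    # back off 100 ms between polls
    done
    return 1                         # timed out waiting for the bdev
}

waitforbdev BaseBdev2 && echo "BaseBdev2 ready after $attempts polls"
```

The real helper in `autotest_common.sh` queries the RPC socket instead of a stub, but the shape of the loop — poll, back off, give up after the timeout — is the same.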
00:23:23.518 "dma_device_type": 1 00:23:23.518 }, 00:23:23.518 { 00:23:23.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.518 "dma_device_type": 2 00:23:23.518 } 00:23:23.518 ], 00:23:23.518 "driver_specific": {} 00:23:23.518 } 00:23:23.518 ] 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.518 16:40:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:23.791 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:23.791 "name": "Existed_Raid", 00:23:23.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.792 "strip_size_kb": 64, 00:23:23.792 "state": "configuring", 00:23:23.792 "raid_level": "concat", 00:23:23.792 "superblock": false, 00:23:23.792 "num_base_bdevs": 4, 00:23:23.792 "num_base_bdevs_discovered": 2, 00:23:23.792 "num_base_bdevs_operational": 4, 00:23:23.792 "base_bdevs_list": [ 00:23:23.792 { 00:23:23.792 "name": "BaseBdev1", 00:23:23.792 "uuid": "4f7e34f3-e0f4-442a-a92b-03b715895dc5", 00:23:23.792 "is_configured": true, 00:23:23.792 "data_offset": 0, 00:23:23.792 "data_size": 65536 00:23:23.792 }, 00:23:23.792 { 00:23:23.792 "name": "BaseBdev2", 00:23:23.792 "uuid": "5f815b14-a02a-4abd-8de1-7598709acd8f", 00:23:23.792 "is_configured": true, 00:23:23.792 "data_offset": 0, 00:23:23.792 "data_size": 65536 00:23:23.792 }, 00:23:23.792 { 00:23:23.792 "name": "BaseBdev3", 00:23:23.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.792 "is_configured": false, 00:23:23.792 "data_offset": 0, 00:23:23.792 "data_size": 0 00:23:23.792 }, 00:23:23.792 { 00:23:23.792 "name": "BaseBdev4", 00:23:23.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.792 "is_configured": false, 00:23:23.792 "data_offset": 0, 00:23:23.792 "data_size": 0 00:23:23.792 } 00:23:23.792 ] 00:23:23.792 }' 00:23:23.792 16:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:23.792 16:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.367 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:24.626 [2024-07-24 16:40:21.341113] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:24.626 BaseBdev3 00:23:24.626 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:24.626 16:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:24.626 16:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:24.626 16:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:24.626 16:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:24.626 16:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:24.626 16:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:24.883 16:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:25.141 [ 00:23:25.141 { 00:23:25.141 "name": "BaseBdev3", 00:23:25.141 "aliases": [ 00:23:25.141 "1d094abf-d451-47b9-90c8-28c5d72e52b2" 00:23:25.141 ], 00:23:25.141 "product_name": "Malloc disk", 00:23:25.141 "block_size": 512, 00:23:25.141 "num_blocks": 65536, 00:23:25.141 "uuid": "1d094abf-d451-47b9-90c8-28c5d72e52b2", 00:23:25.141 "assigned_rate_limits": { 00:23:25.141 "rw_ios_per_sec": 0, 00:23:25.141 "rw_mbytes_per_sec": 0, 00:23:25.141 "r_mbytes_per_sec": 0, 00:23:25.141 "w_mbytes_per_sec": 0 00:23:25.141 }, 00:23:25.141 "claimed": true, 00:23:25.141 "claim_type": "exclusive_write", 00:23:25.141 "zoned": false, 00:23:25.141 "supported_io_types": { 00:23:25.141 "read": true, 00:23:25.141 "write": true, 00:23:25.141 "unmap": true, 00:23:25.141 "flush": true, 00:23:25.141 
"reset": true, 00:23:25.141 "nvme_admin": false, 00:23:25.141 "nvme_io": false, 00:23:25.141 "nvme_io_md": false, 00:23:25.141 "write_zeroes": true, 00:23:25.141 "zcopy": true, 00:23:25.141 "get_zone_info": false, 00:23:25.141 "zone_management": false, 00:23:25.141 "zone_append": false, 00:23:25.141 "compare": false, 00:23:25.141 "compare_and_write": false, 00:23:25.141 "abort": true, 00:23:25.141 "seek_hole": false, 00:23:25.141 "seek_data": false, 00:23:25.141 "copy": true, 00:23:25.141 "nvme_iov_md": false 00:23:25.141 }, 00:23:25.141 "memory_domains": [ 00:23:25.141 { 00:23:25.141 "dma_device_id": "system", 00:23:25.141 "dma_device_type": 1 00:23:25.141 }, 00:23:25.141 { 00:23:25.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.141 "dma_device_type": 2 00:23:25.141 } 00:23:25.141 ], 00:23:25.141 "driver_specific": {} 00:23:25.141 } 00:23:25.141 ] 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.141 16:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.399 16:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:25.399 "name": "Existed_Raid", 00:23:25.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.399 "strip_size_kb": 64, 00:23:25.399 "state": "configuring", 00:23:25.399 "raid_level": "concat", 00:23:25.399 "superblock": false, 00:23:25.399 "num_base_bdevs": 4, 00:23:25.399 "num_base_bdevs_discovered": 3, 00:23:25.399 "num_base_bdevs_operational": 4, 00:23:25.399 "base_bdevs_list": [ 00:23:25.399 { 00:23:25.399 "name": "BaseBdev1", 00:23:25.399 "uuid": "4f7e34f3-e0f4-442a-a92b-03b715895dc5", 00:23:25.399 "is_configured": true, 00:23:25.399 "data_offset": 0, 00:23:25.399 "data_size": 65536 00:23:25.399 }, 00:23:25.399 { 00:23:25.399 "name": "BaseBdev2", 00:23:25.399 "uuid": "5f815b14-a02a-4abd-8de1-7598709acd8f", 00:23:25.399 "is_configured": true, 00:23:25.399 "data_offset": 0, 00:23:25.399 "data_size": 65536 00:23:25.399 }, 00:23:25.399 { 00:23:25.399 "name": "BaseBdev3", 00:23:25.399 "uuid": "1d094abf-d451-47b9-90c8-28c5d72e52b2", 00:23:25.399 "is_configured": true, 00:23:25.399 "data_offset": 0, 00:23:25.399 "data_size": 65536 00:23:25.399 }, 00:23:25.399 { 00:23:25.399 "name": "BaseBdev4", 00:23:25.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.399 "is_configured": 
false, 00:23:25.399 "data_offset": 0, 00:23:25.399 "data_size": 0 00:23:25.399 } 00:23:25.399 ] 00:23:25.399 }' 00:23:25.399 16:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:25.399 16:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.965 16:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:26.223 [2024-07-24 16:40:22.872004] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:26.223 [2024-07-24 16:40:22.872052] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:23:26.223 [2024-07-24 16:40:22.872065] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:26.223 [2024-07-24 16:40:22.872455] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:23:26.223 [2024-07-24 16:40:22.872695] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:23:26.223 [2024-07-24 16:40:22.872714] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:23:26.223 [2024-07-24 16:40:22.873043] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.223 BaseBdev4 00:23:26.223 16:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:26.223 16:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:26.223 16:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:26.223 16:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:26.223 16:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- 
# [[ -z '' ]] 00:23:26.223 16:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:26.223 16:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:26.481 16:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:26.481 [ 00:23:26.481 { 00:23:26.481 "name": "BaseBdev4", 00:23:26.481 "aliases": [ 00:23:26.481 "3a96ff69-80f5-4712-8f3f-297037f628e2" 00:23:26.481 ], 00:23:26.481 "product_name": "Malloc disk", 00:23:26.481 "block_size": 512, 00:23:26.481 "num_blocks": 65536, 00:23:26.481 "uuid": "3a96ff69-80f5-4712-8f3f-297037f628e2", 00:23:26.481 "assigned_rate_limits": { 00:23:26.481 "rw_ios_per_sec": 0, 00:23:26.481 "rw_mbytes_per_sec": 0, 00:23:26.481 "r_mbytes_per_sec": 0, 00:23:26.481 "w_mbytes_per_sec": 0 00:23:26.481 }, 00:23:26.481 "claimed": true, 00:23:26.481 "claim_type": "exclusive_write", 00:23:26.481 "zoned": false, 00:23:26.481 "supported_io_types": { 00:23:26.481 "read": true, 00:23:26.481 "write": true, 00:23:26.481 "unmap": true, 00:23:26.481 "flush": true, 00:23:26.481 "reset": true, 00:23:26.481 "nvme_admin": false, 00:23:26.481 "nvme_io": false, 00:23:26.481 "nvme_io_md": false, 00:23:26.481 "write_zeroes": true, 00:23:26.481 "zcopy": true, 00:23:26.481 "get_zone_info": false, 00:23:26.481 "zone_management": false, 00:23:26.481 "zone_append": false, 00:23:26.481 "compare": false, 00:23:26.481 "compare_and_write": false, 00:23:26.481 "abort": true, 00:23:26.481 "seek_hole": false, 00:23:26.481 "seek_data": false, 00:23:26.481 "copy": true, 00:23:26.481 "nvme_iov_md": false 00:23:26.481 }, 00:23:26.481 "memory_domains": [ 00:23:26.481 { 00:23:26.482 "dma_device_id": "system", 00:23:26.482 "dma_device_type": 1 
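The surrounding trace is one pass of the test's outer loop: `(( i++ ))`, `(( i < num_base_bdevs ))`, create `BaseBdev$i` with `bdev_malloc_create 32 512`, then re-verify `Existed_Raid`. The state stays `configuring` until the fourth base bdev is claimed, at which point the log shows the raid going `online`. A condensed, runnable sketch of that loop, with the `rpc.py` call stubbed out by `echo` (the state bookkeeping here is an assumption inferred from the log, not SPDK logic):

```shell
# Hypothetical condensation of the add-one-bdev-then-verify loop in the log.
num_base_bdevs=4
discovered=1                         # BaseBdev1 was created before this loop
for ((i = 2; i <= num_base_bdevs; i++)); do
    echo "rpc: bdev_malloc_create 32 512 -b BaseBdev$i"   # stub for rpc.py
    discovered=$((discovered + 1))
    if ((discovered < num_base_bdevs)); then
        state=configuring            # members still missing
    else
        state=online                 # all 4 members present: raid comes up
    fi
    echo "Existed_Raid: discovered=$discovered/$num_base_bdevs state=$state"
done
```

This mirrors why `num_base_bdevs_discovered` ticks 1, 2, 3, 4 across the successive `verify_raid_bdev_state` blocks while `num_base_bdevs_operational` stays 4 throughout.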
00:23:26.482 }, 00:23:26.482 { 00:23:26.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.482 "dma_device_type": 2 00:23:26.482 } 00:23:26.482 ], 00:23:26.482 "driver_specific": {} 00:23:26.482 } 00:23:26.482 ] 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:26.739 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:26.739 "name": "Existed_Raid", 00:23:26.739 "uuid": "834f75d1-dbeb-481b-aa94-d1c92294f632", 00:23:26.739 "strip_size_kb": 64, 00:23:26.739 "state": "online", 00:23:26.739 "raid_level": "concat", 00:23:26.739 "superblock": false, 00:23:26.739 "num_base_bdevs": 4, 00:23:26.739 "num_base_bdevs_discovered": 4, 00:23:26.739 "num_base_bdevs_operational": 4, 00:23:26.739 "base_bdevs_list": [ 00:23:26.739 { 00:23:26.739 "name": "BaseBdev1", 00:23:26.739 "uuid": "4f7e34f3-e0f4-442a-a92b-03b715895dc5", 00:23:26.739 "is_configured": true, 00:23:26.740 "data_offset": 0, 00:23:26.740 "data_size": 65536 00:23:26.740 }, 00:23:26.740 { 00:23:26.740 "name": "BaseBdev2", 00:23:26.740 "uuid": "5f815b14-a02a-4abd-8de1-7598709acd8f", 00:23:26.740 "is_configured": true, 00:23:26.740 "data_offset": 0, 00:23:26.740 "data_size": 65536 00:23:26.740 }, 00:23:26.740 { 00:23:26.740 "name": "BaseBdev3", 00:23:26.740 "uuid": "1d094abf-d451-47b9-90c8-28c5d72e52b2", 00:23:26.740 "is_configured": true, 00:23:26.740 "data_offset": 0, 00:23:26.740 "data_size": 65536 00:23:26.740 }, 00:23:26.740 { 00:23:26.740 "name": "BaseBdev4", 00:23:26.740 "uuid": "3a96ff69-80f5-4712-8f3f-297037f628e2", 00:23:26.740 "is_configured": true, 00:23:26.740 "data_offset": 0, 00:23:26.740 "data_size": 65536 00:23:26.740 } 00:23:26.740 ] 00:23:26.740 }' 00:23:26.740 16:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:26.740 16:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.305 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:27.305 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:27.305 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:23:27.305 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:27.305 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:27.305 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:27.305 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:27.305 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:27.563 [2024-07-24 16:40:24.368539] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:27.563 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:27.563 "name": "Existed_Raid", 00:23:27.563 "aliases": [ 00:23:27.563 "834f75d1-dbeb-481b-aa94-d1c92294f632" 00:23:27.563 ], 00:23:27.563 "product_name": "Raid Volume", 00:23:27.563 "block_size": 512, 00:23:27.563 "num_blocks": 262144, 00:23:27.563 "uuid": "834f75d1-dbeb-481b-aa94-d1c92294f632", 00:23:27.563 "assigned_rate_limits": { 00:23:27.563 "rw_ios_per_sec": 0, 00:23:27.563 "rw_mbytes_per_sec": 0, 00:23:27.563 "r_mbytes_per_sec": 0, 00:23:27.563 "w_mbytes_per_sec": 0 00:23:27.563 }, 00:23:27.563 "claimed": false, 00:23:27.563 "zoned": false, 00:23:27.563 "supported_io_types": { 00:23:27.563 "read": true, 00:23:27.563 "write": true, 00:23:27.563 "unmap": true, 00:23:27.563 "flush": true, 00:23:27.563 "reset": true, 00:23:27.563 "nvme_admin": false, 00:23:27.563 "nvme_io": false, 00:23:27.563 "nvme_io_md": false, 00:23:27.563 "write_zeroes": true, 00:23:27.563 "zcopy": false, 00:23:27.563 "get_zone_info": false, 00:23:27.563 "zone_management": false, 00:23:27.563 "zone_append": false, 00:23:27.563 "compare": false, 00:23:27.563 "compare_and_write": false, 00:23:27.563 "abort": false, 00:23:27.563 "seek_hole": 
false, 00:23:27.563 "seek_data": false, 00:23:27.563 "copy": false, 00:23:27.563 "nvme_iov_md": false 00:23:27.563 }, 00:23:27.563 "memory_domains": [ 00:23:27.563 { 00:23:27.563 "dma_device_id": "system", 00:23:27.563 "dma_device_type": 1 00:23:27.563 }, 00:23:27.563 { 00:23:27.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.563 "dma_device_type": 2 00:23:27.563 }, 00:23:27.563 { 00:23:27.563 "dma_device_id": "system", 00:23:27.563 "dma_device_type": 1 00:23:27.563 }, 00:23:27.563 { 00:23:27.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.563 "dma_device_type": 2 00:23:27.563 }, 00:23:27.563 { 00:23:27.563 "dma_device_id": "system", 00:23:27.563 "dma_device_type": 1 00:23:27.563 }, 00:23:27.563 { 00:23:27.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.563 "dma_device_type": 2 00:23:27.563 }, 00:23:27.563 { 00:23:27.563 "dma_device_id": "system", 00:23:27.563 "dma_device_type": 1 00:23:27.563 }, 00:23:27.563 { 00:23:27.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.563 "dma_device_type": 2 00:23:27.563 } 00:23:27.563 ], 00:23:27.563 "driver_specific": { 00:23:27.563 "raid": { 00:23:27.563 "uuid": "834f75d1-dbeb-481b-aa94-d1c92294f632", 00:23:27.563 "strip_size_kb": 64, 00:23:27.563 "state": "online", 00:23:27.563 "raid_level": "concat", 00:23:27.563 "superblock": false, 00:23:27.563 "num_base_bdevs": 4, 00:23:27.563 "num_base_bdevs_discovered": 4, 00:23:27.563 "num_base_bdevs_operational": 4, 00:23:27.563 "base_bdevs_list": [ 00:23:27.563 { 00:23:27.563 "name": "BaseBdev1", 00:23:27.563 "uuid": "4f7e34f3-e0f4-442a-a92b-03b715895dc5", 00:23:27.563 "is_configured": true, 00:23:27.563 "data_offset": 0, 00:23:27.563 "data_size": 65536 00:23:27.563 }, 00:23:27.563 { 00:23:27.563 "name": "BaseBdev2", 00:23:27.563 "uuid": "5f815b14-a02a-4abd-8de1-7598709acd8f", 00:23:27.563 "is_configured": true, 00:23:27.563 "data_offset": 0, 00:23:27.563 "data_size": 65536 00:23:27.563 }, 00:23:27.563 { 00:23:27.563 "name": "BaseBdev3", 00:23:27.563 "uuid": 
"1d094abf-d451-47b9-90c8-28c5d72e52b2", 00:23:27.563 "is_configured": true, 00:23:27.563 "data_offset": 0, 00:23:27.563 "data_size": 65536 00:23:27.563 }, 00:23:27.563 { 00:23:27.563 "name": "BaseBdev4", 00:23:27.563 "uuid": "3a96ff69-80f5-4712-8f3f-297037f628e2", 00:23:27.563 "is_configured": true, 00:23:27.563 "data_offset": 0, 00:23:27.563 "data_size": 65536 00:23:27.563 } 00:23:27.563 ] 00:23:27.563 } 00:23:27.563 } 00:23:27.563 }' 00:23:27.563 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:27.821 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:27.821 BaseBdev2 00:23:27.821 BaseBdev3 00:23:27.821 BaseBdev4' 00:23:27.821 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:27.821 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:27.821 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:27.821 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:27.821 "name": "BaseBdev1", 00:23:27.821 "aliases": [ 00:23:27.821 "4f7e34f3-e0f4-442a-a92b-03b715895dc5" 00:23:27.821 ], 00:23:27.821 "product_name": "Malloc disk", 00:23:27.821 "block_size": 512, 00:23:27.821 "num_blocks": 65536, 00:23:27.821 "uuid": "4f7e34f3-e0f4-442a-a92b-03b715895dc5", 00:23:27.821 "assigned_rate_limits": { 00:23:27.821 "rw_ios_per_sec": 0, 00:23:27.821 "rw_mbytes_per_sec": 0, 00:23:27.821 "r_mbytes_per_sec": 0, 00:23:27.821 "w_mbytes_per_sec": 0 00:23:27.821 }, 00:23:27.821 "claimed": true, 00:23:27.821 "claim_type": "exclusive_write", 00:23:27.821 "zoned": false, 00:23:27.821 "supported_io_types": { 00:23:27.821 "read": true, 00:23:27.821 
"write": true, 00:23:27.821 "unmap": true, 00:23:27.821 "flush": true, 00:23:27.821 "reset": true, 00:23:27.821 "nvme_admin": false, 00:23:27.821 "nvme_io": false, 00:23:27.821 "nvme_io_md": false, 00:23:27.821 "write_zeroes": true, 00:23:27.821 "zcopy": true, 00:23:27.821 "get_zone_info": false, 00:23:27.821 "zone_management": false, 00:23:27.821 "zone_append": false, 00:23:27.821 "compare": false, 00:23:27.821 "compare_and_write": false, 00:23:27.821 "abort": true, 00:23:27.821 "seek_hole": false, 00:23:27.821 "seek_data": false, 00:23:27.821 "copy": true, 00:23:27.821 "nvme_iov_md": false 00:23:27.821 }, 00:23:27.821 "memory_domains": [ 00:23:27.821 { 00:23:27.821 "dma_device_id": "system", 00:23:27.821 "dma_device_type": 1 00:23:27.821 }, 00:23:27.821 { 00:23:27.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.821 "dma_device_type": 2 00:23:27.821 } 00:23:27.821 ], 00:23:27.821 "driver_specific": {} 00:23:27.821 }' 00:23:27.821 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.079 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.079 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:28.079 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.079 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.079 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:28.079 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:28.079 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:28.079 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:28.079 16:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:28.337 16:40:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:28.337 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:28.337 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:28.337 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:28.337 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:28.595 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:28.595 "name": "BaseBdev2", 00:23:28.595 "aliases": [ 00:23:28.595 "5f815b14-a02a-4abd-8de1-7598709acd8f" 00:23:28.595 ], 00:23:28.595 "product_name": "Malloc disk", 00:23:28.595 "block_size": 512, 00:23:28.595 "num_blocks": 65536, 00:23:28.595 "uuid": "5f815b14-a02a-4abd-8de1-7598709acd8f", 00:23:28.595 "assigned_rate_limits": { 00:23:28.595 "rw_ios_per_sec": 0, 00:23:28.595 "rw_mbytes_per_sec": 0, 00:23:28.595 "r_mbytes_per_sec": 0, 00:23:28.595 "w_mbytes_per_sec": 0 00:23:28.595 }, 00:23:28.595 "claimed": true, 00:23:28.595 "claim_type": "exclusive_write", 00:23:28.595 "zoned": false, 00:23:28.595 "supported_io_types": { 00:23:28.595 "read": true, 00:23:28.595 "write": true, 00:23:28.595 "unmap": true, 00:23:28.595 "flush": true, 00:23:28.595 "reset": true, 00:23:28.595 "nvme_admin": false, 00:23:28.595 "nvme_io": false, 00:23:28.595 "nvme_io_md": false, 00:23:28.595 "write_zeroes": true, 00:23:28.595 "zcopy": true, 00:23:28.595 "get_zone_info": false, 00:23:28.595 "zone_management": false, 00:23:28.595 "zone_append": false, 00:23:28.595 "compare": false, 00:23:28.595 "compare_and_write": false, 00:23:28.595 "abort": true, 00:23:28.595 "seek_hole": false, 00:23:28.595 "seek_data": false, 00:23:28.595 "copy": true, 00:23:28.595 "nvme_iov_md": false 00:23:28.595 }, 
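The `jq .block_size` / `jq .md_size` / `jq .dif_type` calls followed by `[[ 512 == 512 ]]` and `[[ null == null ]]` in the trace are per-base-bdev property assertions against the `bdev_get_bdevs` output. A small self-contained sketch of that check, using a canned JSON fragment and a `python3` one-liner standing in for `jq` so it has no external dependency (the fragment and `get_field` helper are illustrative, not SPDK output):

```shell
# Hypothetical sketch of the per-bdev property checks in the log.
# A trimmed bdev_get_bdevs-style record stands in for the live RPC reply.
base_bdev_info='{"name":"BaseBdev2","block_size":512,"num_blocks":65536}'

# Stand-in for: jq ".$1" <<<"$base_bdev_info".
# Missing keys print "null", matching jq's behavior in the trace.
get_field() {
    python3 -c 'import json, sys
d = json.loads(sys.argv[1])
print(json.dumps(d.get(sys.argv[2])))' "$base_bdev_info" "$1"
}

block_size=$(get_field block_size)
md_size=$(get_field md_size)         # key absent -> "null", as in the log
dif_type=$(get_field dif_type)       # likewise "null"

[[ $block_size == 512 ]] && echo "block_size ok"
[[ $md_size == null ]] && echo "md_size ok"
[[ $dif_type == null ]] && echo "dif_type ok"
```

In the actual script these comparisons gate the test: a malloc base bdev with no metadata or DIF is expected to report `null` for both fields, and `512` for its block size.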
00:23:28.595 "memory_domains": [ 00:23:28.595 { 00:23:28.595 "dma_device_id": "system", 00:23:28.595 "dma_device_type": 1 00:23:28.595 }, 00:23:28.595 { 00:23:28.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.595 "dma_device_type": 2 00:23:28.595 } 00:23:28.595 ], 00:23:28.595 "driver_specific": {} 00:23:28.595 }' 00:23:28.595 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.595 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:28.595 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:28.595 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.595 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:28.595 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:28.595 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:28.853 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:28.853 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:28.853 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:28.853 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:28.853 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:28.853 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:28.853 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:28.853 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:29.111 16:40:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:29.111 "name": "BaseBdev3", 00:23:29.111 "aliases": [ 00:23:29.111 "1d094abf-d451-47b9-90c8-28c5d72e52b2" 00:23:29.111 ], 00:23:29.111 "product_name": "Malloc disk", 00:23:29.111 "block_size": 512, 00:23:29.111 "num_blocks": 65536, 00:23:29.111 "uuid": "1d094abf-d451-47b9-90c8-28c5d72e52b2", 00:23:29.111 "assigned_rate_limits": { 00:23:29.111 "rw_ios_per_sec": 0, 00:23:29.111 "rw_mbytes_per_sec": 0, 00:23:29.111 "r_mbytes_per_sec": 0, 00:23:29.111 "w_mbytes_per_sec": 0 00:23:29.111 }, 00:23:29.111 "claimed": true, 00:23:29.111 "claim_type": "exclusive_write", 00:23:29.111 "zoned": false, 00:23:29.111 "supported_io_types": { 00:23:29.111 "read": true, 00:23:29.111 "write": true, 00:23:29.111 "unmap": true, 00:23:29.111 "flush": true, 00:23:29.111 "reset": true, 00:23:29.111 "nvme_admin": false, 00:23:29.111 "nvme_io": false, 00:23:29.111 "nvme_io_md": false, 00:23:29.111 "write_zeroes": true, 00:23:29.111 "zcopy": true, 00:23:29.111 "get_zone_info": false, 00:23:29.111 "zone_management": false, 00:23:29.111 "zone_append": false, 00:23:29.111 "compare": false, 00:23:29.111 "compare_and_write": false, 00:23:29.111 "abort": true, 00:23:29.111 "seek_hole": false, 00:23:29.111 "seek_data": false, 00:23:29.111 "copy": true, 00:23:29.111 "nvme_iov_md": false 00:23:29.111 }, 00:23:29.111 "memory_domains": [ 00:23:29.111 { 00:23:29.111 "dma_device_id": "system", 00:23:29.111 "dma_device_type": 1 00:23:29.111 }, 00:23:29.111 { 00:23:29.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.111 "dma_device_type": 2 00:23:29.111 } 00:23:29.111 ], 00:23:29.111 "driver_specific": {} 00:23:29.111 }' 00:23:29.111 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.111 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.111 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:23:29.111 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:29.111 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:29.370 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:29.370 16:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.370 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.370 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:29.370 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.370 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.370 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:29.370 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:29.370 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:29.370 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:29.629 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:29.629 "name": "BaseBdev4", 00:23:29.629 "aliases": [ 00:23:29.629 "3a96ff69-80f5-4712-8f3f-297037f628e2" 00:23:29.629 ], 00:23:29.629 "product_name": "Malloc disk", 00:23:29.629 "block_size": 512, 00:23:29.629 "num_blocks": 65536, 00:23:29.629 "uuid": "3a96ff69-80f5-4712-8f3f-297037f628e2", 00:23:29.629 "assigned_rate_limits": { 00:23:29.629 "rw_ios_per_sec": 0, 00:23:29.629 "rw_mbytes_per_sec": 0, 00:23:29.629 "r_mbytes_per_sec": 0, 00:23:29.629 "w_mbytes_per_sec": 0 00:23:29.629 }, 00:23:29.629 "claimed": true, 00:23:29.629 
"claim_type": "exclusive_write", 00:23:29.629 "zoned": false, 00:23:29.629 "supported_io_types": { 00:23:29.629 "read": true, 00:23:29.629 "write": true, 00:23:29.629 "unmap": true, 00:23:29.629 "flush": true, 00:23:29.629 "reset": true, 00:23:29.629 "nvme_admin": false, 00:23:29.629 "nvme_io": false, 00:23:29.629 "nvme_io_md": false, 00:23:29.629 "write_zeroes": true, 00:23:29.629 "zcopy": true, 00:23:29.629 "get_zone_info": false, 00:23:29.629 "zone_management": false, 00:23:29.629 "zone_append": false, 00:23:29.629 "compare": false, 00:23:29.629 "compare_and_write": false, 00:23:29.629 "abort": true, 00:23:29.629 "seek_hole": false, 00:23:29.629 "seek_data": false, 00:23:29.629 "copy": true, 00:23:29.629 "nvme_iov_md": false 00:23:29.629 }, 00:23:29.629 "memory_domains": [ 00:23:29.629 { 00:23:29.629 "dma_device_id": "system", 00:23:29.629 "dma_device_type": 1 00:23:29.629 }, 00:23:29.629 { 00:23:29.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.629 "dma_device_type": 2 00:23:29.629 } 00:23:29.629 ], 00:23:29.629 "driver_specific": {} 00:23:29.629 }' 00:23:29.629 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.629 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.629 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:29.629 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:29.629 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:29.886 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:29.886 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.886 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.886 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:23:29.886 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.886 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.886 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:29.886 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:30.143 [2024-07-24 16:40:26.879015] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:30.143 [2024-07-24 16:40:26.879050] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:30.143 [2024-07-24 16:40:26.879108] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.143 16:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:30.401 16:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:30.401 "name": "Existed_Raid", 00:23:30.401 "uuid": "834f75d1-dbeb-481b-aa94-d1c92294f632", 00:23:30.401 "strip_size_kb": 64, 00:23:30.401 "state": "offline", 00:23:30.401 "raid_level": "concat", 00:23:30.401 "superblock": false, 00:23:30.401 "num_base_bdevs": 4, 00:23:30.401 "num_base_bdevs_discovered": 3, 00:23:30.401 "num_base_bdevs_operational": 3, 00:23:30.401 "base_bdevs_list": [ 00:23:30.401 { 00:23:30.401 "name": null, 00:23:30.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.401 "is_configured": false, 00:23:30.401 "data_offset": 0, 00:23:30.401 "data_size": 65536 00:23:30.401 }, 00:23:30.401 { 00:23:30.401 "name": "BaseBdev2", 00:23:30.401 "uuid": "5f815b14-a02a-4abd-8de1-7598709acd8f", 00:23:30.401 "is_configured": true, 00:23:30.401 "data_offset": 0, 00:23:30.401 "data_size": 65536 00:23:30.401 }, 00:23:30.401 { 00:23:30.401 "name": "BaseBdev3", 00:23:30.401 "uuid": "1d094abf-d451-47b9-90c8-28c5d72e52b2", 00:23:30.401 "is_configured": true, 00:23:30.401 
"data_offset": 0, 00:23:30.402 "data_size": 65536 00:23:30.402 }, 00:23:30.402 { 00:23:30.402 "name": "BaseBdev4", 00:23:30.402 "uuid": "3a96ff69-80f5-4712-8f3f-297037f628e2", 00:23:30.402 "is_configured": true, 00:23:30.402 "data_offset": 0, 00:23:30.402 "data_size": 65536 00:23:30.402 } 00:23:30.402 ] 00:23:30.402 }' 00:23:30.402 16:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:30.402 16:40:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.966 16:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:30.966 16:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:30.966 16:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.966 16:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:31.224 16:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:31.224 16:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:31.224 16:40:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:31.482 [2024-07-24 16:40:28.181797] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:31.482 16:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:31.482 16:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:31.482 16:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:23:31.482 16:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:31.739 16:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:31.739 16:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:31.739 16:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:31.996 [2024-07-24 16:40:28.760414] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:32.254 16:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:32.254 16:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:32.254 16:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.254 16:40:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:32.511 16:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:32.512 16:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:32.512 16:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:32.512 [2024-07-24 16:40:29.364277] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:32.512 [2024-07-24 16:40:29.364333] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:23:32.769 16:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:32.769 
16:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:32.769 16:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.769 16:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:33.027 16:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:33.027 16:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:33.027 16:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:33.027 16:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:33.027 16:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:33.027 16:40:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:33.285 BaseBdev2 00:23:33.285 16:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:33.285 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:33.285 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:33.285 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:33.285 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:33.285 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:33.285 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:33.542 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:33.800 [ 00:23:33.800 { 00:23:33.800 "name": "BaseBdev2", 00:23:33.800 "aliases": [ 00:23:33.800 "85eeeada-56db-4a70-86b9-3a8ae37be849" 00:23:33.800 ], 00:23:33.800 "product_name": "Malloc disk", 00:23:33.800 "block_size": 512, 00:23:33.800 "num_blocks": 65536, 00:23:33.800 "uuid": "85eeeada-56db-4a70-86b9-3a8ae37be849", 00:23:33.800 "assigned_rate_limits": { 00:23:33.800 "rw_ios_per_sec": 0, 00:23:33.800 "rw_mbytes_per_sec": 0, 00:23:33.800 "r_mbytes_per_sec": 0, 00:23:33.800 "w_mbytes_per_sec": 0 00:23:33.800 }, 00:23:33.800 "claimed": false, 00:23:33.800 "zoned": false, 00:23:33.800 "supported_io_types": { 00:23:33.800 "read": true, 00:23:33.800 "write": true, 00:23:33.800 "unmap": true, 00:23:33.800 "flush": true, 00:23:33.800 "reset": true, 00:23:33.800 "nvme_admin": false, 00:23:33.800 "nvme_io": false, 00:23:33.800 "nvme_io_md": false, 00:23:33.800 "write_zeroes": true, 00:23:33.800 "zcopy": true, 00:23:33.800 "get_zone_info": false, 00:23:33.800 "zone_management": false, 00:23:33.800 "zone_append": false, 00:23:33.800 "compare": false, 00:23:33.800 "compare_and_write": false, 00:23:33.800 "abort": true, 00:23:33.800 "seek_hole": false, 00:23:33.800 "seek_data": false, 00:23:33.800 "copy": true, 00:23:33.800 "nvme_iov_md": false 00:23:33.800 }, 00:23:33.800 "memory_domains": [ 00:23:33.800 { 00:23:33.800 "dma_device_id": "system", 00:23:33.800 "dma_device_type": 1 00:23:33.800 }, 00:23:33.800 { 00:23:33.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:33.800 "dma_device_type": 2 00:23:33.800 } 00:23:33.800 ], 00:23:33.800 "driver_specific": {} 00:23:33.800 } 00:23:33.800 ] 00:23:33.800 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:23:33.800 16:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:33.800 16:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:33.800 16:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:34.058 BaseBdev3 00:23:34.058 16:40:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:34.058 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:34.058 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:34.058 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:34.058 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:34.058 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:34.058 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:34.316 16:40:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:34.575 [ 00:23:34.575 { 00:23:34.575 "name": "BaseBdev3", 00:23:34.575 "aliases": [ 00:23:34.575 "cc969914-6c8f-48b6-a283-7a30ba438001" 00:23:34.575 ], 00:23:34.575 "product_name": "Malloc disk", 00:23:34.575 "block_size": 512, 00:23:34.575 "num_blocks": 65536, 00:23:34.575 "uuid": "cc969914-6c8f-48b6-a283-7a30ba438001", 00:23:34.575 "assigned_rate_limits": { 00:23:34.575 "rw_ios_per_sec": 0, 00:23:34.575 "rw_mbytes_per_sec": 0, 00:23:34.575 
"r_mbytes_per_sec": 0, 00:23:34.575 "w_mbytes_per_sec": 0 00:23:34.575 }, 00:23:34.575 "claimed": false, 00:23:34.575 "zoned": false, 00:23:34.575 "supported_io_types": { 00:23:34.575 "read": true, 00:23:34.575 "write": true, 00:23:34.575 "unmap": true, 00:23:34.575 "flush": true, 00:23:34.575 "reset": true, 00:23:34.575 "nvme_admin": false, 00:23:34.575 "nvme_io": false, 00:23:34.575 "nvme_io_md": false, 00:23:34.575 "write_zeroes": true, 00:23:34.575 "zcopy": true, 00:23:34.575 "get_zone_info": false, 00:23:34.575 "zone_management": false, 00:23:34.575 "zone_append": false, 00:23:34.575 "compare": false, 00:23:34.575 "compare_and_write": false, 00:23:34.575 "abort": true, 00:23:34.575 "seek_hole": false, 00:23:34.575 "seek_data": false, 00:23:34.575 "copy": true, 00:23:34.575 "nvme_iov_md": false 00:23:34.575 }, 00:23:34.575 "memory_domains": [ 00:23:34.575 { 00:23:34.575 "dma_device_id": "system", 00:23:34.575 "dma_device_type": 1 00:23:34.575 }, 00:23:34.575 { 00:23:34.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:34.575 "dma_device_type": 2 00:23:34.575 } 00:23:34.575 ], 00:23:34.575 "driver_specific": {} 00:23:34.575 } 00:23:34.575 ] 00:23:34.575 16:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:34.575 16:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:34.575 16:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:34.575 16:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:34.832 BaseBdev4 00:23:34.832 16:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:34.832 16:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:34.832 16:40:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:34.832 16:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:34.832 16:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:34.832 16:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:34.832 16:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:35.090 16:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:35.090 [ 00:23:35.090 { 00:23:35.090 "name": "BaseBdev4", 00:23:35.090 "aliases": [ 00:23:35.090 "87ab3580-d29d-4b93-8d4c-ce93e4517a31" 00:23:35.090 ], 00:23:35.090 "product_name": "Malloc disk", 00:23:35.090 "block_size": 512, 00:23:35.090 "num_blocks": 65536, 00:23:35.090 "uuid": "87ab3580-d29d-4b93-8d4c-ce93e4517a31", 00:23:35.090 "assigned_rate_limits": { 00:23:35.090 "rw_ios_per_sec": 0, 00:23:35.090 "rw_mbytes_per_sec": 0, 00:23:35.090 "r_mbytes_per_sec": 0, 00:23:35.090 "w_mbytes_per_sec": 0 00:23:35.090 }, 00:23:35.090 "claimed": false, 00:23:35.090 "zoned": false, 00:23:35.090 "supported_io_types": { 00:23:35.090 "read": true, 00:23:35.090 "write": true, 00:23:35.090 "unmap": true, 00:23:35.090 "flush": true, 00:23:35.090 "reset": true, 00:23:35.090 "nvme_admin": false, 00:23:35.090 "nvme_io": false, 00:23:35.090 "nvme_io_md": false, 00:23:35.090 "write_zeroes": true, 00:23:35.090 "zcopy": true, 00:23:35.090 "get_zone_info": false, 00:23:35.090 "zone_management": false, 00:23:35.090 "zone_append": false, 00:23:35.090 "compare": false, 00:23:35.090 "compare_and_write": false, 00:23:35.090 "abort": true, 00:23:35.090 
"seek_hole": false, 00:23:35.090 "seek_data": false, 00:23:35.090 "copy": true, 00:23:35.090 "nvme_iov_md": false 00:23:35.090 }, 00:23:35.090 "memory_domains": [ 00:23:35.090 { 00:23:35.090 "dma_device_id": "system", 00:23:35.090 "dma_device_type": 1 00:23:35.090 }, 00:23:35.090 { 00:23:35.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.090 "dma_device_type": 2 00:23:35.090 } 00:23:35.090 ], 00:23:35.090 "driver_specific": {} 00:23:35.090 } 00:23:35.090 ] 00:23:35.090 16:40:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:35.090 16:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:35.090 16:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:35.090 16:40:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:35.348 [2024-07-24 16:40:32.150319] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:35.348 [2024-07-24 16:40:32.150366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:35.348 [2024-07-24 16:40:32.150398] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:35.348 [2024-07-24 16:40:32.152716] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:35.348 [2024-07-24 16:40:32.152776] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:35.348 16:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:35.348 16:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:35.348 16:40:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:35.348 16:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:35.348 16:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:35.348 16:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:35.348 16:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:35.348 16:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:35.348 16:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:35.348 16:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:35.348 16:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.348 16:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:35.605 16:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:35.605 "name": "Existed_Raid", 00:23:35.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.605 "strip_size_kb": 64, 00:23:35.605 "state": "configuring", 00:23:35.605 "raid_level": "concat", 00:23:35.605 "superblock": false, 00:23:35.605 "num_base_bdevs": 4, 00:23:35.605 "num_base_bdevs_discovered": 3, 00:23:35.605 "num_base_bdevs_operational": 4, 00:23:35.605 "base_bdevs_list": [ 00:23:35.605 { 00:23:35.605 "name": "BaseBdev1", 00:23:35.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.605 "is_configured": false, 00:23:35.605 "data_offset": 0, 00:23:35.605 "data_size": 0 00:23:35.605 }, 00:23:35.605 { 00:23:35.605 "name": "BaseBdev2", 00:23:35.605 "uuid": 
"85eeeada-56db-4a70-86b9-3a8ae37be849", 00:23:35.605 "is_configured": true, 00:23:35.605 "data_offset": 0, 00:23:35.605 "data_size": 65536 00:23:35.605 }, 00:23:35.605 { 00:23:35.605 "name": "BaseBdev3", 00:23:35.605 "uuid": "cc969914-6c8f-48b6-a283-7a30ba438001", 00:23:35.605 "is_configured": true, 00:23:35.605 "data_offset": 0, 00:23:35.605 "data_size": 65536 00:23:35.605 }, 00:23:35.605 { 00:23:35.605 "name": "BaseBdev4", 00:23:35.605 "uuid": "87ab3580-d29d-4b93-8d4c-ce93e4517a31", 00:23:35.605 "is_configured": true, 00:23:35.605 "data_offset": 0, 00:23:35.605 "data_size": 65536 00:23:35.605 } 00:23:35.605 ] 00:23:35.605 }' 00:23:35.605 16:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:35.605 16:40:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:36.169 16:40:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:36.427 [2024-07-24 16:40:33.169001] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:36.427 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:36.427 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:36.427 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:36.427 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:36.427 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:36.427 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:36.427 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:23:36.427 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.427 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.427 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.427 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.427 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:36.689 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:36.689 "name": "Existed_Raid", 00:23:36.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.689 "strip_size_kb": 64, 00:23:36.689 "state": "configuring", 00:23:36.689 "raid_level": "concat", 00:23:36.689 "superblock": false, 00:23:36.689 "num_base_bdevs": 4, 00:23:36.689 "num_base_bdevs_discovered": 2, 00:23:36.689 "num_base_bdevs_operational": 4, 00:23:36.689 "base_bdevs_list": [ 00:23:36.689 { 00:23:36.689 "name": "BaseBdev1", 00:23:36.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:36.689 "is_configured": false, 00:23:36.689 "data_offset": 0, 00:23:36.689 "data_size": 0 00:23:36.689 }, 00:23:36.689 { 00:23:36.689 "name": null, 00:23:36.689 "uuid": "85eeeada-56db-4a70-86b9-3a8ae37be849", 00:23:36.689 "is_configured": false, 00:23:36.689 "data_offset": 0, 00:23:36.689 "data_size": 65536 00:23:36.689 }, 00:23:36.689 { 00:23:36.689 "name": "BaseBdev3", 00:23:36.689 "uuid": "cc969914-6c8f-48b6-a283-7a30ba438001", 00:23:36.689 "is_configured": true, 00:23:36.689 "data_offset": 0, 00:23:36.689 "data_size": 65536 00:23:36.689 }, 00:23:36.689 { 00:23:36.689 "name": "BaseBdev4", 00:23:36.689 "uuid": "87ab3580-d29d-4b93-8d4c-ce93e4517a31", 00:23:36.689 "is_configured": true, 00:23:36.689 
"data_offset": 0,
00:23:36.689 "data_size": 65536
00:23:36.689 }
00:23:36.689 ]
00:23:36.689 }'
00:23:36.689 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:23:36.689 16:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:23:37.252 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:37.252 16:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:23:37.512 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]]
00:23:37.512 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:23:37.810 [2024-07-24 16:40:34.487400] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:37.810 BaseBdev1
00:23:37.810 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1
00:23:37.810 16:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:23:37.810 16:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:23:37.810 16:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:23:37.810 16:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:23:37.810 16:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:23:37.810 16:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:38.069 16:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:23:38.326 [
00:23:38.326 {
00:23:38.326 "name": "BaseBdev1",
00:23:38.326 "aliases": [
00:23:38.326 "ed01104a-802f-46cb-b0d3-878cbfdf8b0f"
00:23:38.326 ],
00:23:38.326 "product_name": "Malloc disk",
00:23:38.326 "block_size": 512,
00:23:38.326 "num_blocks": 65536,
00:23:38.326 "uuid": "ed01104a-802f-46cb-b0d3-878cbfdf8b0f",
00:23:38.326 "assigned_rate_limits": {
00:23:38.326 "rw_ios_per_sec": 0,
00:23:38.326 "rw_mbytes_per_sec": 0,
00:23:38.326 "r_mbytes_per_sec": 0,
00:23:38.326 "w_mbytes_per_sec": 0
00:23:38.326 },
00:23:38.326 "claimed": true,
00:23:38.326 "claim_type": "exclusive_write",
00:23:38.327 "zoned": false,
00:23:38.327 "supported_io_types": {
00:23:38.327 "read": true,
00:23:38.327 "write": true,
00:23:38.327 "unmap": true,
00:23:38.327 "flush": true,
00:23:38.327 "reset": true,
00:23:38.327 "nvme_admin": false,
00:23:38.327 "nvme_io": false,
00:23:38.327 "nvme_io_md": false,
00:23:38.327 "write_zeroes": true,
00:23:38.327 "zcopy": true,
00:23:38.327 "get_zone_info": false,
00:23:38.327 "zone_management": false,
00:23:38.327 "zone_append": false,
00:23:38.327 "compare": false,
00:23:38.327 "compare_and_write": false,
00:23:38.327 "abort": true,
00:23:38.327 "seek_hole": false,
00:23:38.327 "seek_data": false,
00:23:38.327 "copy": true,
00:23:38.327 "nvme_iov_md": false
00:23:38.327 },
00:23:38.327 "memory_domains": [
00:23:38.327 {
00:23:38.327 "dma_device_id": "system",
00:23:38.327 "dma_device_type": 1
00:23:38.327 },
00:23:38.327 {
00:23:38.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:38.327 "dma_device_type": 2
00:23:38.327 }
00:23:38.327 ],
00:23:38.327 "driver_specific": {}
00:23:38.327 }
00:23:38.327 ]
00:23:38.327 16:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:23:38.327 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:23:38.327 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:23:38.327 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:23:38.327 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:23:38.327 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:23:38.327 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:23:38.327 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:23:38.327 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:23:38.327 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:23:38.327 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:23:38.327 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:38.327 16:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:38.327 16:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:23:38.327 "name": "Existed_Raid",
00:23:38.327 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:38.327 "strip_size_kb": 64,
00:23:38.327 "state": "configuring",
00:23:38.327 "raid_level": "concat",
00:23:38.327 "superblock": false,
00:23:38.327 "num_base_bdevs": 4,
00:23:38.327 "num_base_bdevs_discovered": 3,
00:23:38.327 "num_base_bdevs_operational": 4,
00:23:38.327 "base_bdevs_list": [
00:23:38.327 {
00:23:38.327 "name": "BaseBdev1",
00:23:38.327 "uuid": "ed01104a-802f-46cb-b0d3-878cbfdf8b0f",
00:23:38.327 "is_configured": true,
00:23:38.327 "data_offset": 0,
00:23:38.327 "data_size": 65536
00:23:38.327 },
00:23:38.327 {
00:23:38.327 "name": null,
00:23:38.327 "uuid": "85eeeada-56db-4a70-86b9-3a8ae37be849",
00:23:38.327 "is_configured": false,
00:23:38.327 "data_offset": 0,
00:23:38.327 "data_size": 65536
00:23:38.327 },
00:23:38.327 {
00:23:38.327 "name": "BaseBdev3",
00:23:38.327 "uuid": "cc969914-6c8f-48b6-a283-7a30ba438001",
00:23:38.327 "is_configured": true,
00:23:38.327 "data_offset": 0,
00:23:38.327 "data_size": 65536
00:23:38.327 },
00:23:38.327 {
00:23:38.327 "name": "BaseBdev4",
00:23:38.327 "uuid": "87ab3580-d29d-4b93-8d4c-ce93e4517a31",
00:23:38.327 "is_configured": true,
00:23:38.327 "data_offset": 0,
00:23:38.327 "data_size": 65536
00:23:38.327 }
00:23:38.327 ]
00:23:38.327 }'
00:23:38.327 16:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:23:38.327 16:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:23:38.892 16:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:38.892 16:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:23:39.150 16:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]]
00:23:39.150 16:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3
00:23:39.408 [2024-07-24 16:40:36.164013] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:23:39.408 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:23:39.408 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:23:39.408 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:23:39.408 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:23:39.408 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:23:39.408 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:23:39.408 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:23:39.408 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:23:39.408 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:23:39.408 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:23:39.408 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:39.408 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:39.666 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:23:39.666 "name": "Existed_Raid",
00:23:39.666 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:39.666 "strip_size_kb": 64,
00:23:39.666 "state": "configuring",
00:23:39.666 "raid_level": "concat",
00:23:39.666 "superblock": false,
00:23:39.666 "num_base_bdevs": 4,
00:23:39.666 "num_base_bdevs_discovered": 2,
00:23:39.666 "num_base_bdevs_operational": 4,
00:23:39.666 "base_bdevs_list": [
00:23:39.666 {
00:23:39.666 "name": "BaseBdev1",
00:23:39.666 "uuid": "ed01104a-802f-46cb-b0d3-878cbfdf8b0f",
00:23:39.666 "is_configured": true,
00:23:39.666 "data_offset": 0,
00:23:39.666 "data_size": 65536
00:23:39.666 },
00:23:39.666 {
00:23:39.666 "name": null,
00:23:39.666 "uuid": "85eeeada-56db-4a70-86b9-3a8ae37be849",
00:23:39.666 "is_configured": false,
00:23:39.666 "data_offset": 0,
00:23:39.666 "data_size": 65536
00:23:39.666 },
00:23:39.666 {
00:23:39.666 "name": null,
00:23:39.666 "uuid": "cc969914-6c8f-48b6-a283-7a30ba438001",
00:23:39.666 "is_configured": false,
00:23:39.666 "data_offset": 0,
00:23:39.666 "data_size": 65536
00:23:39.666 },
00:23:39.666 {
00:23:39.666 "name": "BaseBdev4",
00:23:39.666 "uuid": "87ab3580-d29d-4b93-8d4c-ce93e4517a31",
00:23:39.666 "is_configured": true,
00:23:39.666 "data_offset": 0,
00:23:39.666 "data_size": 65536
00:23:39.666 }
00:23:39.666 ]
00:23:39.666 }'
00:23:39.666 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:23:39.666 16:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:23:40.231 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:23:40.231 16:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:40.488 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]]
00:23:40.488 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:23:40.746 [2024-07-24 16:40:37.375307] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:40.746 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:23:40.746 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:23:40.746 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:23:40.746 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:23:40.746 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:23:40.746 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:23:40.746 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:23:40.746 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:23:40.746 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:23:40.746 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:23:40.746 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:40.746 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:41.004 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:23:41.004 "name": "Existed_Raid",
00:23:41.004 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:41.004 "strip_size_kb": 64,
00:23:41.004 "state": "configuring",
00:23:41.004 "raid_level": "concat",
00:23:41.004 "superblock": false,
00:23:41.004 "num_base_bdevs": 4,
00:23:41.004 "num_base_bdevs_discovered": 3,
00:23:41.004 "num_base_bdevs_operational": 4,
00:23:41.004 "base_bdevs_list": [
00:23:41.004 {
00:23:41.004 "name": "BaseBdev1",
00:23:41.004 "uuid": "ed01104a-802f-46cb-b0d3-878cbfdf8b0f",
00:23:41.004 "is_configured": true,
00:23:41.004 "data_offset": 0,
00:23:41.004 "data_size": 65536
00:23:41.004 },
00:23:41.004 {
00:23:41.004 "name": null,
00:23:41.004 "uuid": "85eeeada-56db-4a70-86b9-3a8ae37be849",
00:23:41.004 "is_configured": false,
00:23:41.004 "data_offset": 0,
00:23:41.004 "data_size": 65536
00:23:41.004 },
00:23:41.004 {
00:23:41.004 "name": "BaseBdev3",
00:23:41.004 "uuid": "cc969914-6c8f-48b6-a283-7a30ba438001",
00:23:41.004 "is_configured": true,
00:23:41.004 "data_offset": 0,
00:23:41.004 "data_size": 65536
00:23:41.004 },
00:23:41.004 {
00:23:41.004 "name": "BaseBdev4",
00:23:41.004 "uuid": "87ab3580-d29d-4b93-8d4c-ce93e4517a31",
00:23:41.004 "is_configured": true,
00:23:41.004 "data_offset": 0,
00:23:41.004 "data_size": 65536
00:23:41.004 }
00:23:41.004 ]
00:23:41.004 }'
00:23:41.004 16:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:23:41.004 16:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:23:41.569 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:41.569 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:23:41.569 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]]
00:23:41.569 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:23:41.827 [2024-07-24 16:40:38.506407] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:23:41.827 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:23:41.827 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:23:41.827 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:23:41.827 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:23:41.827 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:23:41.827 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:23:41.827 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:23:41.827 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:23:41.827 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:23:41.827 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:23:41.827 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:41.827 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:42.084 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:23:42.084 "name": "Existed_Raid",
00:23:42.084 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:42.084 "strip_size_kb": 64,
00:23:42.084 "state": "configuring",
00:23:42.084 "raid_level": "concat",
00:23:42.084 "superblock": false,
00:23:42.084 "num_base_bdevs": 4,
00:23:42.084 "num_base_bdevs_discovered": 2,
00:23:42.084 "num_base_bdevs_operational": 4,
00:23:42.084 "base_bdevs_list": [
00:23:42.084 {
00:23:42.084 "name": null,
00:23:42.084 "uuid": "ed01104a-802f-46cb-b0d3-878cbfdf8b0f",
00:23:42.084 "is_configured": false,
00:23:42.084 "data_offset": 0,
00:23:42.084 "data_size": 65536
00:23:42.084 },
00:23:42.084 {
00:23:42.084 "name": null,
00:23:42.084 "uuid": "85eeeada-56db-4a70-86b9-3a8ae37be849",
00:23:42.084 "is_configured": false,
00:23:42.084 "data_offset": 0,
00:23:42.084 "data_size": 65536
00:23:42.084 },
00:23:42.084 {
00:23:42.084 "name": "BaseBdev3",
00:23:42.084 "uuid": "cc969914-6c8f-48b6-a283-7a30ba438001",
00:23:42.084 "is_configured": true,
00:23:42.084 "data_offset": 0,
00:23:42.084 "data_size": 65536
00:23:42.084 },
00:23:42.084 {
00:23:42.084 "name": "BaseBdev4",
00:23:42.084 "uuid": "87ab3580-d29d-4b93-8d4c-ce93e4517a31",
00:23:42.084 "is_configured": true,
00:23:42.084 "data_offset": 0,
00:23:42.084 "data_size": 65536
00:23:42.084 }
00:23:42.084 ]
00:23:42.084 }'
00:23:42.084 16:40:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:23:42.084 16:40:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:23:42.651 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:42.651 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:23:42.909 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]]
00:23:42.909 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:23:43.167 [2024-07-24 16:40:39.795567] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:23:43.167 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:23:43.167 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:23:43.167 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:23:43.167 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:23:43.167 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:23:43.167 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:23:43.167 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:23:43.167 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:23:43.167 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:23:43.167 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:23:43.167 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:43.167 16:40:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:43.426 16:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:23:43.426 "name": "Existed_Raid",
00:23:43.426 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:43.426 "strip_size_kb": 64,
00:23:43.426 "state": "configuring",
00:23:43.426 "raid_level": "concat",
00:23:43.426 "superblock": false,
00:23:43.426 "num_base_bdevs": 4,
00:23:43.426 "num_base_bdevs_discovered": 3,
00:23:43.426 "num_base_bdevs_operational": 4,
00:23:43.426 "base_bdevs_list": [
00:23:43.426 {
00:23:43.426 "name": null,
00:23:43.426 "uuid": "ed01104a-802f-46cb-b0d3-878cbfdf8b0f",
00:23:43.426 "is_configured": false,
00:23:43.426 "data_offset": 0,
00:23:43.426 "data_size": 65536
00:23:43.426 },
00:23:43.426 {
00:23:43.426 "name": "BaseBdev2",
00:23:43.426 "uuid": "85eeeada-56db-4a70-86b9-3a8ae37be849",
00:23:43.426 "is_configured": true,
00:23:43.426 "data_offset": 0,
00:23:43.426 "data_size": 65536
00:23:43.426 },
00:23:43.426 {
00:23:43.426 "name": "BaseBdev3",
00:23:43.426 "uuid": "cc969914-6c8f-48b6-a283-7a30ba438001",
00:23:43.426 "is_configured": true,
00:23:43.426 "data_offset": 0,
00:23:43.426 "data_size": 65536
00:23:43.426 },
00:23:43.426 {
00:23:43.426 "name": "BaseBdev4",
00:23:43.426 "uuid": "87ab3580-d29d-4b93-8d4c-ce93e4517a31",
00:23:43.426 "is_configured": true,
00:23:43.426 "data_offset": 0,
00:23:43.426 "data_size": 65536
00:23:43.426 }
00:23:43.426 ]
00:23:43.426 }'
00:23:43.426 16:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:23:43.426 16:40:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:23:43.995 16:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:43.995 16:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:23:43.995 16:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]]
00:23:43.995 16:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:43.995 16:40:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:23:44.253 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ed01104a-802f-46cb-b0d3-878cbfdf8b0f
00:23:44.512 [2024-07-24 16:40:41.266966] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
[2024-07-24 16:40:41.267013] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080
[2024-07-24 16:40:41.267025] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
[2024-07-24 16:40:41.267361] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20
[2024-07-24 16:40:41.267570] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080
[2024-07-24 16:40:41.267587] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080
[2024-07-24 16:40:41.267882] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:44.512 NewBaseBdev
00:23:44.512 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev
00:23:44.512 16:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:23:44.512 16:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:23:44.512 16:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:23:44.512 16:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:23:44.512 16:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:23:44.512 16:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:44.771 16:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000
00:23:45.030 [
00:23:45.030 {
00:23:45.030 "name": "NewBaseBdev",
00:23:45.030 "aliases": [
00:23:45.030 "ed01104a-802f-46cb-b0d3-878cbfdf8b0f"
00:23:45.030 ],
00:23:45.030 "product_name": "Malloc disk",
00:23:45.030 "block_size": 512,
00:23:45.030 "num_blocks": 65536,
00:23:45.030 "uuid": "ed01104a-802f-46cb-b0d3-878cbfdf8b0f",
00:23:45.030 "assigned_rate_limits": {
00:23:45.030 "rw_ios_per_sec": 0,
00:23:45.030 "rw_mbytes_per_sec": 0,
00:23:45.030 "r_mbytes_per_sec": 0,
00:23:45.030 "w_mbytes_per_sec": 0
00:23:45.030 },
00:23:45.030 "claimed": true,
00:23:45.030 "claim_type": "exclusive_write",
00:23:45.030 "zoned": false,
00:23:45.030 "supported_io_types": {
00:23:45.030 "read": true,
00:23:45.030 "write": true,
00:23:45.030 "unmap": true,
00:23:45.030 "flush": true,
00:23:45.030 "reset": true,
00:23:45.030 "nvme_admin": false,
00:23:45.030 "nvme_io": false,
00:23:45.030 "nvme_io_md": false,
00:23:45.030 "write_zeroes": true,
00:23:45.030 "zcopy": true,
00:23:45.030 "get_zone_info": false,
00:23:45.030 "zone_management": false,
00:23:45.030 "zone_append": false,
00:23:45.030 "compare": false,
00:23:45.030 "compare_and_write": false,
00:23:45.030 "abort": true,
00:23:45.030 "seek_hole": false,
00:23:45.030 "seek_data": false,
00:23:45.030 "copy": true,
00:23:45.030 "nvme_iov_md": false
00:23:45.030 },
00:23:45.030 "memory_domains": [
00:23:45.030 {
00:23:45.030 "dma_device_id": "system",
00:23:45.030 "dma_device_type": 1
00:23:45.030 },
00:23:45.030 {
00:23:45.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:45.030 "dma_device_type": 2
00:23:45.030 }
00:23:45.030 ],
00:23:45.030 "driver_specific": {}
00:23:45.030 }
00:23:45.030 ]
00:23:45.030 16:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:23:45.030 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:23:45.030 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:23:45.030 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:23:45.030 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:23:45.030 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:23:45.030 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:23:45.030 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:23:45.030 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:23:45.030 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:23:45.030 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:23:45.030 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:45.030 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:45.289 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:23:45.289 "name": "Existed_Raid",
00:23:45.289 "uuid": "226191b1-8317-46bd-9d56-c8b125592fec",
00:23:45.289 "strip_size_kb": 64,
00:23:45.289 "state": "online",
00:23:45.289 "raid_level": "concat",
00:23:45.289 "superblock": false,
00:23:45.289 "num_base_bdevs": 4,
00:23:45.289 "num_base_bdevs_discovered": 4,
00:23:45.289 "num_base_bdevs_operational": 4,
00:23:45.289 "base_bdevs_list": [
00:23:45.289 {
00:23:45.289 "name": "NewBaseBdev",
00:23:45.289 "uuid": "ed01104a-802f-46cb-b0d3-878cbfdf8b0f",
00:23:45.289 "is_configured": true,
00:23:45.289 "data_offset": 0,
00:23:45.289 "data_size": 65536
00:23:45.289 },
00:23:45.289 {
00:23:45.289 "name": "BaseBdev2",
00:23:45.289 "uuid": "85eeeada-56db-4a70-86b9-3a8ae37be849",
00:23:45.289 "is_configured": true,
00:23:45.289 "data_offset": 0,
00:23:45.289 "data_size": 65536
00:23:45.289 },
00:23:45.289 {
00:23:45.289 "name": "BaseBdev3",
00:23:45.289 "uuid": "cc969914-6c8f-48b6-a283-7a30ba438001",
00:23:45.289 "is_configured": true,
00:23:45.289 "data_offset": 0,
00:23:45.289 "data_size": 65536
00:23:45.289 },
00:23:45.289 {
00:23:45.289 "name": "BaseBdev4",
00:23:45.289 "uuid": "87ab3580-d29d-4b93-8d4c-ce93e4517a31",
00:23:45.289 "is_configured": true,
00:23:45.289 "data_offset": 0,
00:23:45.290 "data_size": 65536
00:23:45.290 }
00:23:45.290 ]
00:23:45.290 }'
00:23:45.290 16:40:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:23:45.290 16:40:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:23:45.856 16:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid
00:23:45.856 16:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:23:45.856 16:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:23:45.856 16:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:23:45.856 16:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:23:45.856 16:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name
00:23:45.856 16:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:23:45.856 16:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:23:45.856 [2024-07-24 16:40:42.703508] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:46.115 16:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:23:46.115 "name": "Existed_Raid",
00:23:46.115 "aliases": [
00:23:46.115 "226191b1-8317-46bd-9d56-c8b125592fec"
00:23:46.115 ],
00:23:46.115 "product_name": "Raid Volume",
00:23:46.115 "block_size": 512,
00:23:46.115 "num_blocks": 262144,
00:23:46.115 "uuid": "226191b1-8317-46bd-9d56-c8b125592fec",
00:23:46.115 "assigned_rate_limits": {
00:23:46.115 "rw_ios_per_sec": 0,
00:23:46.115 "rw_mbytes_per_sec": 0,
00:23:46.115 "r_mbytes_per_sec": 0,
00:23:46.115 "w_mbytes_per_sec": 0
00:23:46.115 },
00:23:46.115 "claimed": false,
00:23:46.115 "zoned": false,
00:23:46.115 "supported_io_types": {
00:23:46.115 "read": true,
00:23:46.115 "write": true,
00:23:46.115 "unmap": true,
00:23:46.115 "flush": true,
00:23:46.115 "reset": true,
00:23:46.115 "nvme_admin": false,
00:23:46.115 "nvme_io": false,
00:23:46.115 "nvme_io_md": false,
00:23:46.115 "write_zeroes": true,
00:23:46.115 "zcopy": false,
00:23:46.115 "get_zone_info": false,
00:23:46.115 "zone_management": false,
00:23:46.115 "zone_append": false,
00:23:46.115 "compare": false,
00:23:46.115 "compare_and_write": false,
00:23:46.115 "abort": false,
00:23:46.115 "seek_hole": false,
00:23:46.115 "seek_data": false,
00:23:46.115 "copy": false,
00:23:46.115 "nvme_iov_md": false
00:23:46.115 },
00:23:46.115 "memory_domains": [
00:23:46.115 {
00:23:46.115 "dma_device_id": "system",
00:23:46.115 "dma_device_type": 1
00:23:46.115 },
00:23:46.115 {
00:23:46.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:46.115 "dma_device_type": 2
00:23:46.115 },
00:23:46.115 {
00:23:46.115 "dma_device_id": "system",
00:23:46.115 "dma_device_type": 1
00:23:46.115 },
00:23:46.115 {
00:23:46.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:46.115 "dma_device_type": 2
00:23:46.115 },
00:23:46.115 {
00:23:46.115 "dma_device_id": "system",
00:23:46.115 "dma_device_type": 1
00:23:46.115 },
00:23:46.115 {
00:23:46.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:46.115 "dma_device_type": 2
00:23:46.115 },
00:23:46.115 {
00:23:46.115 "dma_device_id": "system",
00:23:46.115 "dma_device_type": 1
00:23:46.115 },
00:23:46.115 {
00:23:46.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:46.115 "dma_device_type": 2
00:23:46.115 }
00:23:46.115 ],
00:23:46.115 "driver_specific": {
00:23:46.115 "raid": {
00:23:46.115 "uuid": "226191b1-8317-46bd-9d56-c8b125592fec",
00:23:46.115 "strip_size_kb": 64,
00:23:46.115 "state": "online",
00:23:46.115 "raid_level": "concat",
00:23:46.115 "superblock": false,
00:23:46.115 "num_base_bdevs": 4,
00:23:46.115 "num_base_bdevs_discovered": 4,
00:23:46.115 "num_base_bdevs_operational": 4,
00:23:46.115 "base_bdevs_list": [
00:23:46.115 {
00:23:46.115 "name": "NewBaseBdev",
00:23:46.115 "uuid": "ed01104a-802f-46cb-b0d3-878cbfdf8b0f",
00:23:46.115 "is_configured": true,
00:23:46.115 "data_offset": 0,
00:23:46.115 "data_size": 65536
00:23:46.115 },
00:23:46.115 {
00:23:46.115 "name": "BaseBdev2",
00:23:46.115 "uuid": "85eeeada-56db-4a70-86b9-3a8ae37be849",
00:23:46.115 "is_configured": true,
00:23:46.115 "data_offset": 0,
00:23:46.115 "data_size": 65536
00:23:46.115 },
00:23:46.115 {
00:23:46.115 "name": "BaseBdev3",
00:23:46.115 "uuid": "cc969914-6c8f-48b6-a283-7a30ba438001",
00:23:46.115 "is_configured": true,
00:23:46.115 "data_offset": 0,
00:23:46.115 "data_size": 65536
00:23:46.115 },
00:23:46.115 {
00:23:46.115 "name": "BaseBdev4",
00:23:46.115 "uuid": "87ab3580-d29d-4b93-8d4c-ce93e4517a31",
00:23:46.115 "is_configured": true,
00:23:46.115 "data_offset": 0,
00:23:46.115 "data_size": 65536
00:23:46.115 }
00:23:46.115 ]
00:23:46.115 }
00:23:46.115 }
00:23:46.115 }'
00:23:46.115 16:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:23:46.115 16:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev
00:23:46.115 BaseBdev2
00:23:46.115 BaseBdev3
00:23:46.115 BaseBdev4'
00:23:46.115 16:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:23:46.115 16:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev
00:23:46.116 16:40:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:23:46.374 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:23:46.374 "name": "NewBaseBdev",
00:23:46.374 "aliases": [
00:23:46.374 "ed01104a-802f-46cb-b0d3-878cbfdf8b0f"
00:23:46.374 ],
00:23:46.374 "product_name": "Malloc disk",
00:23:46.374 "block_size": 512,
00:23:46.374 "num_blocks": 65536,
00:23:46.374 "uuid": "ed01104a-802f-46cb-b0d3-878cbfdf8b0f",
00:23:46.374 "assigned_rate_limits": {
00:23:46.374 "rw_ios_per_sec": 0,
00:23:46.374 "rw_mbytes_per_sec": 0,
00:23:46.374 "r_mbytes_per_sec": 0,
00:23:46.374 "w_mbytes_per_sec": 0
00:23:46.374 },
00:23:46.374 "claimed": true,
00:23:46.374 "claim_type": "exclusive_write",
00:23:46.374 "zoned": false,
00:23:46.374 "supported_io_types": {
00:23:46.374 "read": true,
00:23:46.374 "write": true,
00:23:46.374 "unmap": true,
00:23:46.374 "flush": true,
00:23:46.375 "reset": true,
00:23:46.375 "nvme_admin": false,
00:23:46.375 "nvme_io": false,
00:23:46.375 "nvme_io_md": false,
00:23:46.375 "write_zeroes": true,
00:23:46.375 "zcopy": true,
00:23:46.375 "get_zone_info": false,
00:23:46.375 "zone_management": false,
00:23:46.375 "zone_append": false,
00:23:46.375 "compare": false,
00:23:46.375 "compare_and_write": false,
00:23:46.375 "abort": true,
00:23:46.375 "seek_hole": false,
00:23:46.375 "seek_data": false,
00:23:46.375 "copy": true,
00:23:46.375 "nvme_iov_md": false
00:23:46.375 },
00:23:46.375 "memory_domains": [
00:23:46.375 {
00:23:46.375 "dma_device_id": "system",
00:23:46.375 "dma_device_type": 1
00:23:46.375 },
00:23:46.375 {
00:23:46.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:46.375 "dma_device_type": 2
00:23:46.375 }
00:23:46.375 ],
00:23:46.375 "driver_specific": {}
00:23:46.375 }'
00:23:46.375 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:23:46.375 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:23:46.375 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:23:46.375 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:23:46.375 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:23:46.375 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:23:46.375 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:23:46.375 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:23:46.641 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:23:46.641 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:23:46.641 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:23:46.641 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:23:46.641 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:23:46.641 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2
00:23:46.641 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:23:46.901 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:23:46.902 "name": "BaseBdev2",
00:23:46.902 "aliases": [
00:23:46.902 "85eeeada-56db-4a70-86b9-3a8ae37be849"
00:23:46.902 ],
00:23:46.902 "product_name": "Malloc disk",
00:23:46.902 "block_size": 512,
00:23:46.902 "num_blocks": 65536,
00:23:46.902 "uuid": "85eeeada-56db-4a70-86b9-3a8ae37be849",
00:23:46.902 "assigned_rate_limits": {
00:23:46.902 "rw_ios_per_sec": 0,
00:23:46.902 "rw_mbytes_per_sec": 0,
00:23:46.902 "r_mbytes_per_sec": 0,
00:23:46.902 "w_mbytes_per_sec": 0
00:23:46.902 },
00:23:46.902 "claimed": true,
00:23:46.902 "claim_type": "exclusive_write",
00:23:46.902 "zoned": false,
00:23:46.902 "supported_io_types": {
00:23:46.902 "read": true,
00:23:46.902 "write": true,
00:23:46.902 "unmap": true,
00:23:46.902 "flush": true,
00:23:46.902 "reset": true,
00:23:46.902 "nvme_admin": false,
00:23:46.902 "nvme_io": false,
00:23:46.902 "nvme_io_md": false,
00:23:46.902 "write_zeroes": true,
00:23:46.902 "zcopy": true,
00:23:46.902 "get_zone_info": false,
00:23:46.902 "zone_management": false,
00:23:46.902 "zone_append": false,
00:23:46.902 "compare": false,
00:23:46.902 "compare_and_write": false,
00:23:46.902 "abort": true,
00:23:46.902 "seek_hole": false,
00:23:46.902 "seek_data": false,
00:23:46.902 "copy": true,
00:23:46.902 "nvme_iov_md": false
00:23:46.902 },
00:23:46.902 "memory_domains": [
00:23:46.902 {
00:23:46.902 "dma_device_id": "system",
00:23:46.902 "dma_device_type": 1
00:23:46.902 },
00:23:46.902 {
00:23:46.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:46.902 "dma_device_type": 2
00:23:46.902 }
00:23:46.902 ],
00:23:46.902 "driver_specific": {}
00:23:46.902 }'
00:23:46.902 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:23:46.902 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:23:46.902 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:23:46.902 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:23:46.902 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:23:46.902 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:23:46.902 16:40:43
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:47.160 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:47.160 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:47.160 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.160 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.160 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:47.160 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:47.160 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:47.160 16:40:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:47.418 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:47.418 "name": "BaseBdev3", 00:23:47.418 "aliases": [ 00:23:47.418 "cc969914-6c8f-48b6-a283-7a30ba438001" 00:23:47.418 ], 00:23:47.418 "product_name": "Malloc disk", 00:23:47.418 "block_size": 512, 00:23:47.418 "num_blocks": 65536, 00:23:47.418 "uuid": "cc969914-6c8f-48b6-a283-7a30ba438001", 00:23:47.418 "assigned_rate_limits": { 00:23:47.418 "rw_ios_per_sec": 0, 00:23:47.418 "rw_mbytes_per_sec": 0, 00:23:47.418 "r_mbytes_per_sec": 0, 00:23:47.418 "w_mbytes_per_sec": 0 00:23:47.418 }, 00:23:47.418 "claimed": true, 00:23:47.418 "claim_type": "exclusive_write", 00:23:47.418 "zoned": false, 00:23:47.418 "supported_io_types": { 00:23:47.418 "read": true, 00:23:47.418 "write": true, 00:23:47.418 "unmap": true, 00:23:47.418 "flush": true, 00:23:47.418 "reset": true, 00:23:47.418 "nvme_admin": false, 00:23:47.418 "nvme_io": false, 00:23:47.418 "nvme_io_md": false, 
00:23:47.418 "write_zeroes": true, 00:23:47.418 "zcopy": true, 00:23:47.418 "get_zone_info": false, 00:23:47.418 "zone_management": false, 00:23:47.418 "zone_append": false, 00:23:47.418 "compare": false, 00:23:47.418 "compare_and_write": false, 00:23:47.418 "abort": true, 00:23:47.418 "seek_hole": false, 00:23:47.418 "seek_data": false, 00:23:47.418 "copy": true, 00:23:47.418 "nvme_iov_md": false 00:23:47.418 }, 00:23:47.418 "memory_domains": [ 00:23:47.418 { 00:23:47.418 "dma_device_id": "system", 00:23:47.418 "dma_device_type": 1 00:23:47.418 }, 00:23:47.418 { 00:23:47.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.418 "dma_device_type": 2 00:23:47.418 } 00:23:47.418 ], 00:23:47.418 "driver_specific": {} 00:23:47.418 }' 00:23:47.418 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:47.418 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:47.418 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:47.418 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:47.418 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:47.677 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:47.677 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:47.677 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:47.677 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:47.677 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.677 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.677 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:47.677 16:40:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:47.677 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:47.677 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:47.936 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:47.936 "name": "BaseBdev4", 00:23:47.936 "aliases": [ 00:23:47.936 "87ab3580-d29d-4b93-8d4c-ce93e4517a31" 00:23:47.936 ], 00:23:47.936 "product_name": "Malloc disk", 00:23:47.936 "block_size": 512, 00:23:47.936 "num_blocks": 65536, 00:23:47.936 "uuid": "87ab3580-d29d-4b93-8d4c-ce93e4517a31", 00:23:47.936 "assigned_rate_limits": { 00:23:47.936 "rw_ios_per_sec": 0, 00:23:47.936 "rw_mbytes_per_sec": 0, 00:23:47.936 "r_mbytes_per_sec": 0, 00:23:47.936 "w_mbytes_per_sec": 0 00:23:47.936 }, 00:23:47.936 "claimed": true, 00:23:47.936 "claim_type": "exclusive_write", 00:23:47.936 "zoned": false, 00:23:47.936 "supported_io_types": { 00:23:47.936 "read": true, 00:23:47.936 "write": true, 00:23:47.936 "unmap": true, 00:23:47.936 "flush": true, 00:23:47.936 "reset": true, 00:23:47.936 "nvme_admin": false, 00:23:47.936 "nvme_io": false, 00:23:47.936 "nvme_io_md": false, 00:23:47.936 "write_zeroes": true, 00:23:47.936 "zcopy": true, 00:23:47.936 "get_zone_info": false, 00:23:47.936 "zone_management": false, 00:23:47.936 "zone_append": false, 00:23:47.936 "compare": false, 00:23:47.936 "compare_and_write": false, 00:23:47.936 "abort": true, 00:23:47.936 "seek_hole": false, 00:23:47.936 "seek_data": false, 00:23:47.936 "copy": true, 00:23:47.936 "nvme_iov_md": false 00:23:47.936 }, 00:23:47.936 "memory_domains": [ 00:23:47.936 { 00:23:47.936 "dma_device_id": "system", 00:23:47.936 "dma_device_type": 1 00:23:47.936 }, 00:23:47.936 { 00:23:47.936 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:47.936 "dma_device_type": 2 00:23:47.936 } 00:23:47.936 ], 00:23:47.936 "driver_specific": {} 00:23:47.936 }' 00:23:47.936 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:47.936 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:47.936 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:47.936 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:47.936 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:47.936 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:47.936 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:48.195 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:48.195 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:48.195 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:48.195 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:48.195 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:48.195 16:40:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:48.454 [2024-07-24 16:40:45.137750] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:48.454 [2024-07-24 16:40:45.137784] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:48.454 [2024-07-24 16:40:45.137870] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:48.454 [2024-07-24 16:40:45.137950] 
bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:48.454 [2024-07-24 16:40:45.137966] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:23:48.454 16:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1698720 00:23:48.454 16:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 1698720 ']' 00:23:48.454 16:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 1698720 00:23:48.454 16:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:23:48.454 16:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.454 16:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1698720 00:23:48.454 16:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:48.454 16:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:48.454 16:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1698720' 00:23:48.454 killing process with pid 1698720 00:23:48.454 16:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 1698720 00:23:48.454 [2024-07-24 16:40:45.214542] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:48.454 16:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 1698720 00:23:49.021 [2024-07-24 16:40:45.664959] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:23:50.982 00:23:50.982 real 0m33.226s 00:23:50.982 user 0m58.131s 00:23:50.982 sys 0m5.720s 00:23:50.982 
16:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:50.982 ************************************ 00:23:50.982 END TEST raid_state_function_test 00:23:50.982 ************************************ 00:23:50.982 16:40:47 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:23:50.982 16:40:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:50.982 16:40:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:50.982 16:40:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:50.982 ************************************ 00:23:50.982 START TEST raid_state_function_test_sb 00:23:50.982 ************************************ 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1704935 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1704935' 00:23:50.982 Process raid pid: 1704935 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1704935 /var/tmp/spdk-raid.sock 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 1704935 ']' 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:50.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:50.982 16:40:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:50.982 [2024-07-24 16:40:47.541219] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:23:50.982 [2024-07-24 16:40:47.541335] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:50.982 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:50.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.982 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:50.983 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:50.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:50.983 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:50.983 [2024-07-24 16:40:47.770004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.243 [2024-07-24 16:40:48.067438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.811 [2024-07-24 16:40:48.432000] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:51.811 [2024-07-24 16:40:48.432037] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:51.811 16:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.811 16:40:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:23:51.811 16:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:52.071 [2024-07-24 16:40:48.836031] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:52.071 [2024-07-24 16:40:48.836086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:23:52.071 [2024-07-24 16:40:48.836101] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:52.071 [2024-07-24 16:40:48.836118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:52.071 [2024-07-24 16:40:48.836129] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:52.071 [2024-07-24 16:40:48.836153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:52.071 [2024-07-24 16:40:48.836165] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:52.071 [2024-07-24 16:40:48.836181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:52.071 16:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:52.071 16:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:52.071 16:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:52.071 16:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:52.071 16:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:52.071 16:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:52.071 16:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:52.071 16:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:52.071 16:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:52.071 16:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:23:52.071 16:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.071 16:40:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:52.330 16:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:52.330 "name": "Existed_Raid", 00:23:52.330 "uuid": "b3952668-0e4e-4a66-84b5-43f6d1fb4d27", 00:23:52.330 "strip_size_kb": 64, 00:23:52.330 "state": "configuring", 00:23:52.330 "raid_level": "concat", 00:23:52.330 "superblock": true, 00:23:52.330 "num_base_bdevs": 4, 00:23:52.330 "num_base_bdevs_discovered": 0, 00:23:52.330 "num_base_bdevs_operational": 4, 00:23:52.330 "base_bdevs_list": [ 00:23:52.330 { 00:23:52.330 "name": "BaseBdev1", 00:23:52.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.330 "is_configured": false, 00:23:52.330 "data_offset": 0, 00:23:52.330 "data_size": 0 00:23:52.330 }, 00:23:52.330 { 00:23:52.330 "name": "BaseBdev2", 00:23:52.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.330 "is_configured": false, 00:23:52.330 "data_offset": 0, 00:23:52.330 "data_size": 0 00:23:52.330 }, 00:23:52.330 { 00:23:52.330 "name": "BaseBdev3", 00:23:52.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.330 "is_configured": false, 00:23:52.330 "data_offset": 0, 00:23:52.330 "data_size": 0 00:23:52.330 }, 00:23:52.330 { 00:23:52.330 "name": "BaseBdev4", 00:23:52.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.330 "is_configured": false, 00:23:52.330 "data_offset": 0, 00:23:52.330 "data_size": 0 00:23:52.330 } 00:23:52.330 ] 00:23:52.330 }' 00:23:52.330 16:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:52.330 16:40:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.898 
16:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:53.155 [2024-07-24 16:40:49.834519] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:53.155 [2024-07-24 16:40:49.834559] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:23:53.155 16:40:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:53.413 [2024-07-24 16:40:50.051194] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:53.413 [2024-07-24 16:40:50.051242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:53.413 [2024-07-24 16:40:50.051256] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:53.413 [2024-07-24 16:40:50.051281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:53.413 [2024-07-24 16:40:50.051292] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:53.413 [2024-07-24 16:40:50.051309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:53.413 [2024-07-24 16:40:50.051324] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:53.413 [2024-07-24 16:40:50.051340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:53.413 16:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev1 00:23:53.672 [2024-07-24 16:40:50.325698] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:53.672 BaseBdev1 00:23:53.672 16:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:53.672 16:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:23:53.672 16:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:53.672 16:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:53.672 16:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:53.672 16:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:53.672 16:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:53.932 16:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:53.932 [ 00:23:53.932 { 00:23:53.932 "name": "BaseBdev1", 00:23:53.932 "aliases": [ 00:23:53.932 "d8026721-0c2a-4c73-8d85-5333954dba28" 00:23:53.932 ], 00:23:53.932 "product_name": "Malloc disk", 00:23:53.932 "block_size": 512, 00:23:53.932 "num_blocks": 65536, 00:23:53.932 "uuid": "d8026721-0c2a-4c73-8d85-5333954dba28", 00:23:53.932 "assigned_rate_limits": { 00:23:53.932 "rw_ios_per_sec": 0, 00:23:53.932 "rw_mbytes_per_sec": 0, 00:23:53.932 "r_mbytes_per_sec": 0, 00:23:53.932 "w_mbytes_per_sec": 0 00:23:53.932 }, 00:23:53.932 "claimed": true, 00:23:53.932 "claim_type": "exclusive_write", 00:23:53.932 "zoned": false, 00:23:53.932 "supported_io_types": { 00:23:53.932 "read": true, 00:23:53.932 
"write": true, 00:23:53.932 "unmap": true, 00:23:53.932 "flush": true, 00:23:53.932 "reset": true, 00:23:53.932 "nvme_admin": false, 00:23:53.932 "nvme_io": false, 00:23:53.932 "nvme_io_md": false, 00:23:53.932 "write_zeroes": true, 00:23:53.932 "zcopy": true, 00:23:53.932 "get_zone_info": false, 00:23:53.932 "zone_management": false, 00:23:53.932 "zone_append": false, 00:23:53.932 "compare": false, 00:23:53.932 "compare_and_write": false, 00:23:53.932 "abort": true, 00:23:53.932 "seek_hole": false, 00:23:53.932 "seek_data": false, 00:23:53.932 "copy": true, 00:23:53.932 "nvme_iov_md": false 00:23:53.932 }, 00:23:53.932 "memory_domains": [ 00:23:53.932 { 00:23:53.932 "dma_device_id": "system", 00:23:53.932 "dma_device_type": 1 00:23:53.932 }, 00:23:53.932 { 00:23:53.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.932 "dma_device_type": 2 00:23:53.932 } 00:23:53.932 ], 00:23:53.932 "driver_specific": {} 00:23:53.932 } 00:23:53.932 ] 00:23:54.192 16:40:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:54.192 16:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:54.192 16:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:54.192 16:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:54.192 16:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:54.192 16:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:54.192 16:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:54.192 16:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:54.192 16:40:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:54.192 16:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:54.192 16:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:54.192 16:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.192 16:40:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:54.192 16:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:54.192 "name": "Existed_Raid", 00:23:54.192 "uuid": "d5b0939e-eba6-45ce-88d3-9d86c70d2b0a", 00:23:54.192 "strip_size_kb": 64, 00:23:54.192 "state": "configuring", 00:23:54.192 "raid_level": "concat", 00:23:54.192 "superblock": true, 00:23:54.192 "num_base_bdevs": 4, 00:23:54.192 "num_base_bdevs_discovered": 1, 00:23:54.192 "num_base_bdevs_operational": 4, 00:23:54.192 "base_bdevs_list": [ 00:23:54.192 { 00:23:54.192 "name": "BaseBdev1", 00:23:54.192 "uuid": "d8026721-0c2a-4c73-8d85-5333954dba28", 00:23:54.192 "is_configured": true, 00:23:54.192 "data_offset": 2048, 00:23:54.192 "data_size": 63488 00:23:54.192 }, 00:23:54.192 { 00:23:54.192 "name": "BaseBdev2", 00:23:54.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.192 "is_configured": false, 00:23:54.192 "data_offset": 0, 00:23:54.192 "data_size": 0 00:23:54.192 }, 00:23:54.192 { 00:23:54.192 "name": "BaseBdev3", 00:23:54.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.192 "is_configured": false, 00:23:54.192 "data_offset": 0, 00:23:54.192 "data_size": 0 00:23:54.192 }, 00:23:54.192 { 00:23:54.192 "name": "BaseBdev4", 00:23:54.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.192 "is_configured": false, 00:23:54.192 "data_offset": 0, 00:23:54.192 "data_size": 0 
00:23:54.192 } 00:23:54.192 ] 00:23:54.192 }' 00:23:54.192 16:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:54.192 16:40:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:54.761 16:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:55.019 [2024-07-24 16:40:51.793693] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:55.019 [2024-07-24 16:40:51.793749] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:23:55.019 16:40:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:55.279 [2024-07-24 16:40:52.018411] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:55.279 [2024-07-24 16:40:52.020709] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:55.279 [2024-07-24 16:40:52.020753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:55.279 [2024-07-24 16:40:52.020768] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:55.279 [2024-07-24 16:40:52.020785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:55.279 [2024-07-24 16:40:52.020797] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:55.279 [2024-07-24 16:40:52.020816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:55.279 16:40:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:55.279 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:55.279 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:55.279 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:55.279 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:55.279 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:55.279 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:55.279 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:55.279 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:55.279 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:55.279 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:55.279 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:55.279 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.279 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:55.538 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:55.538 "name": "Existed_Raid", 00:23:55.538 "uuid": "49296a52-9b3c-4364-9139-85f678c3128d", 00:23:55.538 "strip_size_kb": 64, 00:23:55.538 "state": "configuring", 00:23:55.538 "raid_level": "concat", 
00:23:55.538 "superblock": true, 00:23:55.538 "num_base_bdevs": 4, 00:23:55.538 "num_base_bdevs_discovered": 1, 00:23:55.538 "num_base_bdevs_operational": 4, 00:23:55.538 "base_bdevs_list": [ 00:23:55.538 { 00:23:55.538 "name": "BaseBdev1", 00:23:55.538 "uuid": "d8026721-0c2a-4c73-8d85-5333954dba28", 00:23:55.538 "is_configured": true, 00:23:55.538 "data_offset": 2048, 00:23:55.538 "data_size": 63488 00:23:55.538 }, 00:23:55.538 { 00:23:55.538 "name": "BaseBdev2", 00:23:55.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.538 "is_configured": false, 00:23:55.538 "data_offset": 0, 00:23:55.538 "data_size": 0 00:23:55.538 }, 00:23:55.538 { 00:23:55.538 "name": "BaseBdev3", 00:23:55.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.538 "is_configured": false, 00:23:55.538 "data_offset": 0, 00:23:55.538 "data_size": 0 00:23:55.538 }, 00:23:55.538 { 00:23:55.538 "name": "BaseBdev4", 00:23:55.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.538 "is_configured": false, 00:23:55.538 "data_offset": 0, 00:23:55.538 "data_size": 0 00:23:55.538 } 00:23:55.538 ] 00:23:55.538 }' 00:23:55.538 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:55.538 16:40:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:56.107 16:40:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:56.366 [2024-07-24 16:40:53.102308] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:56.366 BaseBdev2 00:23:56.366 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:56.366 16:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:56.366 16:40:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:56.366 16:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:56.366 16:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:56.366 16:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:56.366 16:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:56.625 16:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:56.884 [ 00:23:56.884 { 00:23:56.884 "name": "BaseBdev2", 00:23:56.884 "aliases": [ 00:23:56.884 "a576c8da-ffad-4b8f-9b6d-0c8b1813fc2f" 00:23:56.884 ], 00:23:56.884 "product_name": "Malloc disk", 00:23:56.884 "block_size": 512, 00:23:56.884 "num_blocks": 65536, 00:23:56.884 "uuid": "a576c8da-ffad-4b8f-9b6d-0c8b1813fc2f", 00:23:56.884 "assigned_rate_limits": { 00:23:56.884 "rw_ios_per_sec": 0, 00:23:56.884 "rw_mbytes_per_sec": 0, 00:23:56.884 "r_mbytes_per_sec": 0, 00:23:56.884 "w_mbytes_per_sec": 0 00:23:56.884 }, 00:23:56.884 "claimed": true, 00:23:56.884 "claim_type": "exclusive_write", 00:23:56.884 "zoned": false, 00:23:56.884 "supported_io_types": { 00:23:56.884 "read": true, 00:23:56.884 "write": true, 00:23:56.884 "unmap": true, 00:23:56.884 "flush": true, 00:23:56.884 "reset": true, 00:23:56.884 "nvme_admin": false, 00:23:56.884 "nvme_io": false, 00:23:56.884 "nvme_io_md": false, 00:23:56.884 "write_zeroes": true, 00:23:56.884 "zcopy": true, 00:23:56.884 "get_zone_info": false, 00:23:56.884 "zone_management": false, 00:23:56.884 "zone_append": false, 00:23:56.884 "compare": false, 00:23:56.884 "compare_and_write": false, 00:23:56.884 "abort": 
true, 00:23:56.884 "seek_hole": false, 00:23:56.884 "seek_data": false, 00:23:56.884 "copy": true, 00:23:56.884 "nvme_iov_md": false 00:23:56.884 }, 00:23:56.884 "memory_domains": [ 00:23:56.884 { 00:23:56.884 "dma_device_id": "system", 00:23:56.884 "dma_device_type": 1 00:23:56.884 }, 00:23:56.884 { 00:23:56.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.884 "dma_device_type": 2 00:23:56.884 } 00:23:56.884 ], 00:23:56.884 "driver_specific": {} 00:23:56.884 } 00:23:56.884 ] 00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.884 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:57.143 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:57.143 "name": "Existed_Raid", 00:23:57.143 "uuid": "49296a52-9b3c-4364-9139-85f678c3128d", 00:23:57.143 "strip_size_kb": 64, 00:23:57.143 "state": "configuring", 00:23:57.143 "raid_level": "concat", 00:23:57.143 "superblock": true, 00:23:57.143 "num_base_bdevs": 4, 00:23:57.143 "num_base_bdevs_discovered": 2, 00:23:57.143 "num_base_bdevs_operational": 4, 00:23:57.143 "base_bdevs_list": [ 00:23:57.143 { 00:23:57.143 "name": "BaseBdev1", 00:23:57.143 "uuid": "d8026721-0c2a-4c73-8d85-5333954dba28", 00:23:57.143 "is_configured": true, 00:23:57.143 "data_offset": 2048, 00:23:57.143 "data_size": 63488 00:23:57.143 }, 00:23:57.143 { 00:23:57.143 "name": "BaseBdev2", 00:23:57.143 "uuid": "a576c8da-ffad-4b8f-9b6d-0c8b1813fc2f", 00:23:57.143 "is_configured": true, 00:23:57.143 "data_offset": 2048, 00:23:57.143 "data_size": 63488 00:23:57.143 }, 00:23:57.143 { 00:23:57.143 "name": "BaseBdev3", 00:23:57.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.143 "is_configured": false, 00:23:57.143 "data_offset": 0, 00:23:57.143 "data_size": 0 00:23:57.143 }, 00:23:57.143 { 00:23:57.143 "name": "BaseBdev4", 00:23:57.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.143 "is_configured": false, 00:23:57.143 "data_offset": 0, 00:23:57.143 "data_size": 0 00:23:57.143 } 00:23:57.143 ] 00:23:57.143 }' 00:23:57.143 16:40:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:57.143 16:40:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:57.712 
16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:57.971 [2024-07-24 16:40:54.627516] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:57.971 BaseBdev3 00:23:57.971 16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:57.971 16:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:57.971 16:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:57.971 16:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:57.971 16:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:57.971 16:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:57.971 16:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:57.971 16:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:58.230 [ 00:23:58.230 { 00:23:58.230 "name": "BaseBdev3", 00:23:58.230 "aliases": [ 00:23:58.230 "800e3e7c-40c1-4cca-8b29-e30948d6e9b6" 00:23:58.230 ], 00:23:58.230 "product_name": "Malloc disk", 00:23:58.230 "block_size": 512, 00:23:58.230 "num_blocks": 65536, 00:23:58.230 "uuid": "800e3e7c-40c1-4cca-8b29-e30948d6e9b6", 00:23:58.230 "assigned_rate_limits": { 00:23:58.230 "rw_ios_per_sec": 0, 00:23:58.230 "rw_mbytes_per_sec": 0, 00:23:58.230 "r_mbytes_per_sec": 0, 00:23:58.230 "w_mbytes_per_sec": 0 00:23:58.230 }, 
00:23:58.230 "claimed": true, 00:23:58.230 "claim_type": "exclusive_write", 00:23:58.230 "zoned": false, 00:23:58.230 "supported_io_types": { 00:23:58.230 "read": true, 00:23:58.230 "write": true, 00:23:58.230 "unmap": true, 00:23:58.230 "flush": true, 00:23:58.230 "reset": true, 00:23:58.230 "nvme_admin": false, 00:23:58.230 "nvme_io": false, 00:23:58.230 "nvme_io_md": false, 00:23:58.230 "write_zeroes": true, 00:23:58.230 "zcopy": true, 00:23:58.230 "get_zone_info": false, 00:23:58.230 "zone_management": false, 00:23:58.230 "zone_append": false, 00:23:58.230 "compare": false, 00:23:58.230 "compare_and_write": false, 00:23:58.230 "abort": true, 00:23:58.230 "seek_hole": false, 00:23:58.230 "seek_data": false, 00:23:58.230 "copy": true, 00:23:58.230 "nvme_iov_md": false 00:23:58.230 }, 00:23:58.230 "memory_domains": [ 00:23:58.230 { 00:23:58.230 "dma_device_id": "system", 00:23:58.230 "dma_device_type": 1 00:23:58.230 }, 00:23:58.230 { 00:23:58.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.230 "dma_device_type": 2 00:23:58.230 } 00:23:58.230 ], 00:23:58.230 "driver_specific": {} 00:23:58.230 } 00:23:58.230 ] 00:23:58.230 16:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:58.230 16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:58.230 16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:58.230 16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:58.230 16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:58.230 16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:58.230 16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:58.230 16:40:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:58.230 16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:58.230 16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.230 16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.230 16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.230 16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.230 16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.230 16:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:58.489 16:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:58.490 "name": "Existed_Raid", 00:23:58.490 "uuid": "49296a52-9b3c-4364-9139-85f678c3128d", 00:23:58.490 "strip_size_kb": 64, 00:23:58.490 "state": "configuring", 00:23:58.490 "raid_level": "concat", 00:23:58.490 "superblock": true, 00:23:58.490 "num_base_bdevs": 4, 00:23:58.490 "num_base_bdevs_discovered": 3, 00:23:58.490 "num_base_bdevs_operational": 4, 00:23:58.490 "base_bdevs_list": [ 00:23:58.490 { 00:23:58.490 "name": "BaseBdev1", 00:23:58.490 "uuid": "d8026721-0c2a-4c73-8d85-5333954dba28", 00:23:58.490 "is_configured": true, 00:23:58.490 "data_offset": 2048, 00:23:58.490 "data_size": 63488 00:23:58.490 }, 00:23:58.490 { 00:23:58.490 "name": "BaseBdev2", 00:23:58.490 "uuid": "a576c8da-ffad-4b8f-9b6d-0c8b1813fc2f", 00:23:58.490 "is_configured": true, 00:23:58.490 "data_offset": 2048, 00:23:58.490 "data_size": 63488 00:23:58.490 }, 00:23:58.490 { 00:23:58.490 "name": 
"BaseBdev3", 00:23:58.490 "uuid": "800e3e7c-40c1-4cca-8b29-e30948d6e9b6", 00:23:58.490 "is_configured": true, 00:23:58.490 "data_offset": 2048, 00:23:58.490 "data_size": 63488 00:23:58.490 }, 00:23:58.490 { 00:23:58.490 "name": "BaseBdev4", 00:23:58.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.490 "is_configured": false, 00:23:58.490 "data_offset": 0, 00:23:58.490 "data_size": 0 00:23:58.490 } 00:23:58.490 ] 00:23:58.490 }' 00:23:58.490 16:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:58.490 16:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:59.058 16:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:59.317 [2024-07-24 16:40:55.931900] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:59.317 [2024-07-24 16:40:55.932189] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:23:59.317 [2024-07-24 16:40:55.932214] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:59.317 [2024-07-24 16:40:55.932538] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:23:59.317 [2024-07-24 16:40:55.932772] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:23:59.317 [2024-07-24 16:40:55.932790] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:23:59.317 BaseBdev4 00:23:59.318 [2024-07-24 16:40:55.932972] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.318 16:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:59.318 16:40:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:59.318 16:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:59.318 16:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:23:59.318 16:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:59.318 16:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:59.318 16:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:59.318 16:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:59.577 [ 00:23:59.577 { 00:23:59.577 "name": "BaseBdev4", 00:23:59.577 "aliases": [ 00:23:59.577 "d6d26188-64c0-41a3-8a81-b8b7cdacbfad" 00:23:59.577 ], 00:23:59.577 "product_name": "Malloc disk", 00:23:59.577 "block_size": 512, 00:23:59.577 "num_blocks": 65536, 00:23:59.577 "uuid": "d6d26188-64c0-41a3-8a81-b8b7cdacbfad", 00:23:59.577 "assigned_rate_limits": { 00:23:59.577 "rw_ios_per_sec": 0, 00:23:59.577 "rw_mbytes_per_sec": 0, 00:23:59.577 "r_mbytes_per_sec": 0, 00:23:59.577 "w_mbytes_per_sec": 0 00:23:59.577 }, 00:23:59.577 "claimed": true, 00:23:59.577 "claim_type": "exclusive_write", 00:23:59.577 "zoned": false, 00:23:59.577 "supported_io_types": { 00:23:59.577 "read": true, 00:23:59.577 "write": true, 00:23:59.577 "unmap": true, 00:23:59.577 "flush": true, 00:23:59.577 "reset": true, 00:23:59.577 "nvme_admin": false, 00:23:59.577 "nvme_io": false, 00:23:59.577 "nvme_io_md": false, 00:23:59.577 "write_zeroes": true, 00:23:59.577 "zcopy": true, 00:23:59.577 "get_zone_info": false, 00:23:59.577 "zone_management": false, 
00:23:59.577 "zone_append": false, 00:23:59.577 "compare": false, 00:23:59.577 "compare_and_write": false, 00:23:59.577 "abort": true, 00:23:59.577 "seek_hole": false, 00:23:59.577 "seek_data": false, 00:23:59.577 "copy": true, 00:23:59.577 "nvme_iov_md": false 00:23:59.577 }, 00:23:59.577 "memory_domains": [ 00:23:59.577 { 00:23:59.577 "dma_device_id": "system", 00:23:59.577 "dma_device_type": 1 00:23:59.577 }, 00:23:59.577 { 00:23:59.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.577 "dma_device_type": 2 00:23:59.577 } 00:23:59.577 ], 00:23:59.577 "driver_specific": {} 00:23:59.577 } 00:23:59.577 ] 00:23:59.577 16:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:23:59.577 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:59.577 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:59.577 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:59.577 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:59.577 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:59.578 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:59.578 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:59.578 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:59.578 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:59.578 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:59.578 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:23:59.578 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:59.578 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.578 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:59.836 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:59.836 "name": "Existed_Raid", 00:23:59.836 "uuid": "49296a52-9b3c-4364-9139-85f678c3128d", 00:23:59.837 "strip_size_kb": 64, 00:23:59.837 "state": "online", 00:23:59.837 "raid_level": "concat", 00:23:59.837 "superblock": true, 00:23:59.837 "num_base_bdevs": 4, 00:23:59.837 "num_base_bdevs_discovered": 4, 00:23:59.837 "num_base_bdevs_operational": 4, 00:23:59.837 "base_bdevs_list": [ 00:23:59.837 { 00:23:59.837 "name": "BaseBdev1", 00:23:59.837 "uuid": "d8026721-0c2a-4c73-8d85-5333954dba28", 00:23:59.837 "is_configured": true, 00:23:59.837 "data_offset": 2048, 00:23:59.837 "data_size": 63488 00:23:59.837 }, 00:23:59.837 { 00:23:59.837 "name": "BaseBdev2", 00:23:59.837 "uuid": "a576c8da-ffad-4b8f-9b6d-0c8b1813fc2f", 00:23:59.837 "is_configured": true, 00:23:59.837 "data_offset": 2048, 00:23:59.837 "data_size": 63488 00:23:59.837 }, 00:23:59.837 { 00:23:59.837 "name": "BaseBdev3", 00:23:59.837 "uuid": "800e3e7c-40c1-4cca-8b29-e30948d6e9b6", 00:23:59.837 "is_configured": true, 00:23:59.837 "data_offset": 2048, 00:23:59.837 "data_size": 63488 00:23:59.837 }, 00:23:59.837 { 00:23:59.837 "name": "BaseBdev4", 00:23:59.837 "uuid": "d6d26188-64c0-41a3-8a81-b8b7cdacbfad", 00:23:59.837 "is_configured": true, 00:23:59.837 "data_offset": 2048, 00:23:59.837 "data_size": 63488 00:23:59.837 } 00:23:59.837 ] 00:23:59.837 }' 00:23:59.837 16:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:23:59.837 16:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:00.405 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:00.405 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:00.405 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:00.405 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:00.405 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:00.405 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:00.405 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:00.405 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:00.973 [2024-07-24 16:40:57.637211] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:00.973 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:00.973 "name": "Existed_Raid", 00:24:00.973 "aliases": [ 00:24:00.973 "49296a52-9b3c-4364-9139-85f678c3128d" 00:24:00.973 ], 00:24:00.973 "product_name": "Raid Volume", 00:24:00.973 "block_size": 512, 00:24:00.973 "num_blocks": 253952, 00:24:00.973 "uuid": "49296a52-9b3c-4364-9139-85f678c3128d", 00:24:00.973 "assigned_rate_limits": { 00:24:00.973 "rw_ios_per_sec": 0, 00:24:00.973 "rw_mbytes_per_sec": 0, 00:24:00.973 "r_mbytes_per_sec": 0, 00:24:00.973 "w_mbytes_per_sec": 0 00:24:00.973 }, 00:24:00.973 "claimed": false, 00:24:00.973 "zoned": false, 00:24:00.973 "supported_io_types": { 00:24:00.973 "read": true, 00:24:00.973 "write": true, 
00:24:00.973 "unmap": true, 00:24:00.973 "flush": true, 00:24:00.973 "reset": true, 00:24:00.973 "nvme_admin": false, 00:24:00.973 "nvme_io": false, 00:24:00.973 "nvme_io_md": false, 00:24:00.973 "write_zeroes": true, 00:24:00.973 "zcopy": false, 00:24:00.973 "get_zone_info": false, 00:24:00.973 "zone_management": false, 00:24:00.973 "zone_append": false, 00:24:00.973 "compare": false, 00:24:00.973 "compare_and_write": false, 00:24:00.973 "abort": false, 00:24:00.973 "seek_hole": false, 00:24:00.973 "seek_data": false, 00:24:00.973 "copy": false, 00:24:00.973 "nvme_iov_md": false 00:24:00.973 }, 00:24:00.973 "memory_domains": [ 00:24:00.973 { 00:24:00.973 "dma_device_id": "system", 00:24:00.973 "dma_device_type": 1 00:24:00.973 }, 00:24:00.973 { 00:24:00.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.973 "dma_device_type": 2 00:24:00.973 }, 00:24:00.973 { 00:24:00.973 "dma_device_id": "system", 00:24:00.973 "dma_device_type": 1 00:24:00.973 }, 00:24:00.973 { 00:24:00.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.973 "dma_device_type": 2 00:24:00.973 }, 00:24:00.973 { 00:24:00.973 "dma_device_id": "system", 00:24:00.973 "dma_device_type": 1 00:24:00.973 }, 00:24:00.973 { 00:24:00.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.973 "dma_device_type": 2 00:24:00.973 }, 00:24:00.973 { 00:24:00.973 "dma_device_id": "system", 00:24:00.973 "dma_device_type": 1 00:24:00.973 }, 00:24:00.973 { 00:24:00.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.973 "dma_device_type": 2 00:24:00.973 } 00:24:00.973 ], 00:24:00.973 "driver_specific": { 00:24:00.973 "raid": { 00:24:00.973 "uuid": "49296a52-9b3c-4364-9139-85f678c3128d", 00:24:00.973 "strip_size_kb": 64, 00:24:00.973 "state": "online", 00:24:00.973 "raid_level": "concat", 00:24:00.973 "superblock": true, 00:24:00.973 "num_base_bdevs": 4, 00:24:00.973 "num_base_bdevs_discovered": 4, 00:24:00.973 "num_base_bdevs_operational": 4, 00:24:00.973 "base_bdevs_list": [ 00:24:00.973 { 00:24:00.973 "name": 
"BaseBdev1", 00:24:00.973 "uuid": "d8026721-0c2a-4c73-8d85-5333954dba28", 00:24:00.973 "is_configured": true, 00:24:00.973 "data_offset": 2048, 00:24:00.973 "data_size": 63488 00:24:00.973 }, 00:24:00.973 { 00:24:00.973 "name": "BaseBdev2", 00:24:00.973 "uuid": "a576c8da-ffad-4b8f-9b6d-0c8b1813fc2f", 00:24:00.973 "is_configured": true, 00:24:00.973 "data_offset": 2048, 00:24:00.973 "data_size": 63488 00:24:00.973 }, 00:24:00.973 { 00:24:00.973 "name": "BaseBdev3", 00:24:00.973 "uuid": "800e3e7c-40c1-4cca-8b29-e30948d6e9b6", 00:24:00.973 "is_configured": true, 00:24:00.973 "data_offset": 2048, 00:24:00.973 "data_size": 63488 00:24:00.973 }, 00:24:00.973 { 00:24:00.973 "name": "BaseBdev4", 00:24:00.973 "uuid": "d6d26188-64c0-41a3-8a81-b8b7cdacbfad", 00:24:00.973 "is_configured": true, 00:24:00.973 "data_offset": 2048, 00:24:00.973 "data_size": 63488 00:24:00.973 } 00:24:00.973 ] 00:24:00.973 } 00:24:00.973 } 00:24:00.973 }' 00:24:00.974 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:00.974 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:00.974 BaseBdev2 00:24:00.974 BaseBdev3 00:24:00.974 BaseBdev4' 00:24:00.974 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:00.974 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:00.974 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:01.233 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:01.233 "name": "BaseBdev1", 00:24:01.233 "aliases": [ 00:24:01.233 "d8026721-0c2a-4c73-8d85-5333954dba28" 00:24:01.233 ], 00:24:01.233 "product_name": "Malloc 
disk", 00:24:01.233 "block_size": 512, 00:24:01.233 "num_blocks": 65536, 00:24:01.233 "uuid": "d8026721-0c2a-4c73-8d85-5333954dba28", 00:24:01.233 "assigned_rate_limits": { 00:24:01.233 "rw_ios_per_sec": 0, 00:24:01.233 "rw_mbytes_per_sec": 0, 00:24:01.233 "r_mbytes_per_sec": 0, 00:24:01.233 "w_mbytes_per_sec": 0 00:24:01.233 }, 00:24:01.233 "claimed": true, 00:24:01.233 "claim_type": "exclusive_write", 00:24:01.233 "zoned": false, 00:24:01.233 "supported_io_types": { 00:24:01.233 "read": true, 00:24:01.233 "write": true, 00:24:01.233 "unmap": true, 00:24:01.233 "flush": true, 00:24:01.233 "reset": true, 00:24:01.233 "nvme_admin": false, 00:24:01.233 "nvme_io": false, 00:24:01.233 "nvme_io_md": false, 00:24:01.233 "write_zeroes": true, 00:24:01.233 "zcopy": true, 00:24:01.233 "get_zone_info": false, 00:24:01.233 "zone_management": false, 00:24:01.233 "zone_append": false, 00:24:01.233 "compare": false, 00:24:01.233 "compare_and_write": false, 00:24:01.233 "abort": true, 00:24:01.233 "seek_hole": false, 00:24:01.233 "seek_data": false, 00:24:01.233 "copy": true, 00:24:01.233 "nvme_iov_md": false 00:24:01.233 }, 00:24:01.233 "memory_domains": [ 00:24:01.233 { 00:24:01.233 "dma_device_id": "system", 00:24:01.233 "dma_device_type": 1 00:24:01.233 }, 00:24:01.233 { 00:24:01.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.233 "dma_device_type": 2 00:24:01.233 } 00:24:01.233 ], 00:24:01.233 "driver_specific": {} 00:24:01.233 }' 00:24:01.233 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.233 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.233 16:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:01.233 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.233 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.233 16:40:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:01.233 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.492 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.492 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:01.492 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:01.492 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:01.492 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:01.492 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:01.492 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:01.492 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:01.751 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:01.751 "name": "BaseBdev2", 00:24:01.751 "aliases": [ 00:24:01.751 "a576c8da-ffad-4b8f-9b6d-0c8b1813fc2f" 00:24:01.751 ], 00:24:01.751 "product_name": "Malloc disk", 00:24:01.751 "block_size": 512, 00:24:01.751 "num_blocks": 65536, 00:24:01.751 "uuid": "a576c8da-ffad-4b8f-9b6d-0c8b1813fc2f", 00:24:01.751 "assigned_rate_limits": { 00:24:01.751 "rw_ios_per_sec": 0, 00:24:01.751 "rw_mbytes_per_sec": 0, 00:24:01.751 "r_mbytes_per_sec": 0, 00:24:01.752 "w_mbytes_per_sec": 0 00:24:01.752 }, 00:24:01.752 "claimed": true, 00:24:01.752 "claim_type": "exclusive_write", 00:24:01.752 "zoned": false, 00:24:01.752 "supported_io_types": { 00:24:01.752 "read": true, 00:24:01.752 "write": true, 00:24:01.752 "unmap": true, 00:24:01.752 
"flush": true, 00:24:01.752 "reset": true, 00:24:01.752 "nvme_admin": false, 00:24:01.752 "nvme_io": false, 00:24:01.752 "nvme_io_md": false, 00:24:01.752 "write_zeroes": true, 00:24:01.752 "zcopy": true, 00:24:01.752 "get_zone_info": false, 00:24:01.752 "zone_management": false, 00:24:01.752 "zone_append": false, 00:24:01.752 "compare": false, 00:24:01.752 "compare_and_write": false, 00:24:01.752 "abort": true, 00:24:01.752 "seek_hole": false, 00:24:01.752 "seek_data": false, 00:24:01.752 "copy": true, 00:24:01.752 "nvme_iov_md": false 00:24:01.752 }, 00:24:01.752 "memory_domains": [ 00:24:01.752 { 00:24:01.752 "dma_device_id": "system", 00:24:01.752 "dma_device_type": 1 00:24:01.752 }, 00:24:01.752 { 00:24:01.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.752 "dma_device_type": 2 00:24:01.752 } 00:24:01.752 ], 00:24:01.752 "driver_specific": {} 00:24:01.752 }' 00:24:01.752 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.752 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.752 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:01.752 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:02.011 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:02.011 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:02.011 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:02.011 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:02.011 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:02.011 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:02.011 16:40:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:02.011 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:02.011 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:02.011 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:02.011 16:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:02.270 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:02.270 "name": "BaseBdev3", 00:24:02.270 "aliases": [ 00:24:02.270 "800e3e7c-40c1-4cca-8b29-e30948d6e9b6" 00:24:02.270 ], 00:24:02.270 "product_name": "Malloc disk", 00:24:02.270 "block_size": 512, 00:24:02.270 "num_blocks": 65536, 00:24:02.270 "uuid": "800e3e7c-40c1-4cca-8b29-e30948d6e9b6", 00:24:02.270 "assigned_rate_limits": { 00:24:02.270 "rw_ios_per_sec": 0, 00:24:02.270 "rw_mbytes_per_sec": 0, 00:24:02.270 "r_mbytes_per_sec": 0, 00:24:02.270 "w_mbytes_per_sec": 0 00:24:02.270 }, 00:24:02.270 "claimed": true, 00:24:02.270 "claim_type": "exclusive_write", 00:24:02.270 "zoned": false, 00:24:02.270 "supported_io_types": { 00:24:02.270 "read": true, 00:24:02.270 "write": true, 00:24:02.270 "unmap": true, 00:24:02.270 "flush": true, 00:24:02.270 "reset": true, 00:24:02.270 "nvme_admin": false, 00:24:02.270 "nvme_io": false, 00:24:02.270 "nvme_io_md": false, 00:24:02.270 "write_zeroes": true, 00:24:02.270 "zcopy": true, 00:24:02.270 "get_zone_info": false, 00:24:02.270 "zone_management": false, 00:24:02.270 "zone_append": false, 00:24:02.270 "compare": false, 00:24:02.270 "compare_and_write": false, 00:24:02.270 "abort": true, 00:24:02.270 "seek_hole": false, 00:24:02.270 "seek_data": false, 00:24:02.270 "copy": true, 00:24:02.270 "nvme_iov_md": 
false 00:24:02.270 }, 00:24:02.270 "memory_domains": [ 00:24:02.270 { 00:24:02.270 "dma_device_id": "system", 00:24:02.270 "dma_device_type": 1 00:24:02.270 }, 00:24:02.270 { 00:24:02.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.270 "dma_device_type": 2 00:24:02.270 } 00:24:02.270 ], 00:24:02.270 "driver_specific": {} 00:24:02.270 }' 00:24:02.270 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:02.270 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:02.270 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:02.270 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:02.529 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:02.529 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:02.529 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:02.529 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:02.529 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:02.529 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:02.529 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:02.529 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:02.529 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:02.529 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:02.529 16:40:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:02.788 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:02.788 "name": "BaseBdev4", 00:24:02.788 "aliases": [ 00:24:02.788 "d6d26188-64c0-41a3-8a81-b8b7cdacbfad" 00:24:02.788 ], 00:24:02.788 "product_name": "Malloc disk", 00:24:02.788 "block_size": 512, 00:24:02.788 "num_blocks": 65536, 00:24:02.788 "uuid": "d6d26188-64c0-41a3-8a81-b8b7cdacbfad", 00:24:02.788 "assigned_rate_limits": { 00:24:02.788 "rw_ios_per_sec": 0, 00:24:02.788 "rw_mbytes_per_sec": 0, 00:24:02.788 "r_mbytes_per_sec": 0, 00:24:02.788 "w_mbytes_per_sec": 0 00:24:02.788 }, 00:24:02.788 "claimed": true, 00:24:02.788 "claim_type": "exclusive_write", 00:24:02.788 "zoned": false, 00:24:02.788 "supported_io_types": { 00:24:02.788 "read": true, 00:24:02.788 "write": true, 00:24:02.788 "unmap": true, 00:24:02.788 "flush": true, 00:24:02.788 "reset": true, 00:24:02.788 "nvme_admin": false, 00:24:02.788 "nvme_io": false, 00:24:02.788 "nvme_io_md": false, 00:24:02.788 "write_zeroes": true, 00:24:02.788 "zcopy": true, 00:24:02.788 "get_zone_info": false, 00:24:02.788 "zone_management": false, 00:24:02.788 "zone_append": false, 00:24:02.788 "compare": false, 00:24:02.788 "compare_and_write": false, 00:24:02.788 "abort": true, 00:24:02.788 "seek_hole": false, 00:24:02.788 "seek_data": false, 00:24:02.788 "copy": true, 00:24:02.788 "nvme_iov_md": false 00:24:02.788 }, 00:24:02.788 "memory_domains": [ 00:24:02.788 { 00:24:02.788 "dma_device_id": "system", 00:24:02.788 "dma_device_type": 1 00:24:02.788 }, 00:24:02.788 { 00:24:02.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.788 "dma_device_type": 2 00:24:02.788 } 00:24:02.788 ], 00:24:02.788 "driver_specific": {} 00:24:02.788 }' 00:24:02.788 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:02.788 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:24:03.047 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:03.048 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:03.048 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:03.048 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:03.048 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:03.048 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:03.048 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:03.048 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:03.048 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:03.048 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:03.048 16:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:03.307 [2024-07-24 16:41:00.115558] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:03.307 [2024-07-24 16:41:00.115592] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:03.307 [2024-07-24 16:41:00.115648] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:03.566 16:41:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:03.566 "name": "Existed_Raid", 00:24:03.566 "uuid": "49296a52-9b3c-4364-9139-85f678c3128d", 00:24:03.566 "strip_size_kb": 64, 00:24:03.566 "state": "offline", 00:24:03.566 
"raid_level": "concat", 00:24:03.566 "superblock": true, 00:24:03.566 "num_base_bdevs": 4, 00:24:03.566 "num_base_bdevs_discovered": 3, 00:24:03.566 "num_base_bdevs_operational": 3, 00:24:03.566 "base_bdevs_list": [ 00:24:03.566 { 00:24:03.566 "name": null, 00:24:03.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.566 "is_configured": false, 00:24:03.566 "data_offset": 2048, 00:24:03.566 "data_size": 63488 00:24:03.566 }, 00:24:03.566 { 00:24:03.566 "name": "BaseBdev2", 00:24:03.566 "uuid": "a576c8da-ffad-4b8f-9b6d-0c8b1813fc2f", 00:24:03.566 "is_configured": true, 00:24:03.566 "data_offset": 2048, 00:24:03.566 "data_size": 63488 00:24:03.566 }, 00:24:03.566 { 00:24:03.566 "name": "BaseBdev3", 00:24:03.566 "uuid": "800e3e7c-40c1-4cca-8b29-e30948d6e9b6", 00:24:03.566 "is_configured": true, 00:24:03.566 "data_offset": 2048, 00:24:03.566 "data_size": 63488 00:24:03.566 }, 00:24:03.566 { 00:24:03.566 "name": "BaseBdev4", 00:24:03.566 "uuid": "d6d26188-64c0-41a3-8a81-b8b7cdacbfad", 00:24:03.566 "is_configured": true, 00:24:03.566 "data_offset": 2048, 00:24:03.566 "data_size": 63488 00:24:03.566 } 00:24:03.566 ] 00:24:03.566 }' 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:03.566 16:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:04.178 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:04.178 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:04.178 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.178 16:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:04.437 16:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:24:04.437 16:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:04.437 16:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:04.697 [2024-07-24 16:41:01.356754] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:04.697 16:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:04.697 16:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:04.697 16:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.697 16:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:04.956 16:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:04.956 16:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:04.956 16:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:05.215 [2024-07-24 16:41:01.949513] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:05.474 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:05.474 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:05.474 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.474 16:41:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:05.733 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:05.733 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:05.733 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:05.733 [2024-07-24 16:41:02.557932] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:05.733 [2024-07-24 16:41:02.557989] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:24:05.992 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:05.992 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:05.992 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.992 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:06.251 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:06.251 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:06.251 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:06.251 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:06.251 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:06.251 16:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:06.510 BaseBdev2 00:24:06.510 16:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:06.510 16:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:06.510 16:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:06.510 16:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:06.510 16:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:06.510 16:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:06.510 16:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:06.769 16:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:07.028 [ 00:24:07.028 { 00:24:07.028 "name": "BaseBdev2", 00:24:07.028 "aliases": [ 00:24:07.028 "859bb990-74fd-452c-bb9b-cd7a3416bb53" 00:24:07.028 ], 00:24:07.028 "product_name": "Malloc disk", 00:24:07.028 "block_size": 512, 00:24:07.028 "num_blocks": 65536, 00:24:07.028 "uuid": "859bb990-74fd-452c-bb9b-cd7a3416bb53", 00:24:07.028 "assigned_rate_limits": { 00:24:07.028 "rw_ios_per_sec": 0, 00:24:07.028 "rw_mbytes_per_sec": 0, 00:24:07.028 "r_mbytes_per_sec": 0, 00:24:07.028 "w_mbytes_per_sec": 0 00:24:07.028 }, 00:24:07.028 "claimed": false, 00:24:07.028 "zoned": false, 00:24:07.028 "supported_io_types": { 00:24:07.028 "read": true, 00:24:07.028 "write": true, 00:24:07.028 "unmap": true, 00:24:07.028 "flush": 
true, 00:24:07.028 "reset": true, 00:24:07.028 "nvme_admin": false, 00:24:07.028 "nvme_io": false, 00:24:07.028 "nvme_io_md": false, 00:24:07.028 "write_zeroes": true, 00:24:07.028 "zcopy": true, 00:24:07.028 "get_zone_info": false, 00:24:07.028 "zone_management": false, 00:24:07.028 "zone_append": false, 00:24:07.028 "compare": false, 00:24:07.028 "compare_and_write": false, 00:24:07.028 "abort": true, 00:24:07.028 "seek_hole": false, 00:24:07.028 "seek_data": false, 00:24:07.028 "copy": true, 00:24:07.028 "nvme_iov_md": false 00:24:07.028 }, 00:24:07.028 "memory_domains": [ 00:24:07.028 { 00:24:07.028 "dma_device_id": "system", 00:24:07.028 "dma_device_type": 1 00:24:07.028 }, 00:24:07.028 { 00:24:07.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.028 "dma_device_type": 2 00:24:07.028 } 00:24:07.028 ], 00:24:07.028 "driver_specific": {} 00:24:07.028 } 00:24:07.028 ] 00:24:07.028 16:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:07.028 16:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:07.028 16:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:07.028 16:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:07.287 BaseBdev3 00:24:07.287 16:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:07.287 16:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:24:07.287 16:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:07.287 16:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:07.287 16:41:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:07.287 16:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:07.287 16:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:07.546 16:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:07.546 [ 00:24:07.546 { 00:24:07.546 "name": "BaseBdev3", 00:24:07.546 "aliases": [ 00:24:07.546 "7fae90c7-d72e-431e-ba34-2df483e6e2a4" 00:24:07.546 ], 00:24:07.546 "product_name": "Malloc disk", 00:24:07.546 "block_size": 512, 00:24:07.546 "num_blocks": 65536, 00:24:07.546 "uuid": "7fae90c7-d72e-431e-ba34-2df483e6e2a4", 00:24:07.546 "assigned_rate_limits": { 00:24:07.546 "rw_ios_per_sec": 0, 00:24:07.546 "rw_mbytes_per_sec": 0, 00:24:07.546 "r_mbytes_per_sec": 0, 00:24:07.546 "w_mbytes_per_sec": 0 00:24:07.546 }, 00:24:07.546 "claimed": false, 00:24:07.546 "zoned": false, 00:24:07.546 "supported_io_types": { 00:24:07.546 "read": true, 00:24:07.546 "write": true, 00:24:07.546 "unmap": true, 00:24:07.546 "flush": true, 00:24:07.546 "reset": true, 00:24:07.546 "nvme_admin": false, 00:24:07.546 "nvme_io": false, 00:24:07.546 "nvme_io_md": false, 00:24:07.546 "write_zeroes": true, 00:24:07.546 "zcopy": true, 00:24:07.546 "get_zone_info": false, 00:24:07.546 "zone_management": false, 00:24:07.546 "zone_append": false, 00:24:07.546 "compare": false, 00:24:07.546 "compare_and_write": false, 00:24:07.546 "abort": true, 00:24:07.546 "seek_hole": false, 00:24:07.546 "seek_data": false, 00:24:07.546 "copy": true, 00:24:07.546 "nvme_iov_md": false 00:24:07.546 }, 00:24:07.546 "memory_domains": [ 00:24:07.547 { 00:24:07.547 "dma_device_id": "system", 00:24:07.547 "dma_device_type": 1 
00:24:07.547 }, 00:24:07.547 { 00:24:07.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.547 "dma_device_type": 2 00:24:07.547 } 00:24:07.547 ], 00:24:07.547 "driver_specific": {} 00:24:07.547 } 00:24:07.547 ] 00:24:07.806 16:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:07.806 16:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:07.806 16:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:07.806 16:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:08.064 BaseBdev4 00:24:08.064 16:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:08.064 16:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:24:08.064 16:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:08.064 16:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:08.064 16:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:08.064 16:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:08.064 16:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:08.064 16:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:08.323 [ 00:24:08.323 { 00:24:08.323 "name": "BaseBdev4", 00:24:08.323 "aliases": [ 
00:24:08.323 "33627187-2071-41f6-85c7-8d71f1f1c0c5" 00:24:08.323 ], 00:24:08.323 "product_name": "Malloc disk", 00:24:08.323 "block_size": 512, 00:24:08.323 "num_blocks": 65536, 00:24:08.323 "uuid": "33627187-2071-41f6-85c7-8d71f1f1c0c5", 00:24:08.323 "assigned_rate_limits": { 00:24:08.323 "rw_ios_per_sec": 0, 00:24:08.323 "rw_mbytes_per_sec": 0, 00:24:08.323 "r_mbytes_per_sec": 0, 00:24:08.323 "w_mbytes_per_sec": 0 00:24:08.323 }, 00:24:08.323 "claimed": false, 00:24:08.323 "zoned": false, 00:24:08.323 "supported_io_types": { 00:24:08.323 "read": true, 00:24:08.323 "write": true, 00:24:08.323 "unmap": true, 00:24:08.323 "flush": true, 00:24:08.323 "reset": true, 00:24:08.323 "nvme_admin": false, 00:24:08.323 "nvme_io": false, 00:24:08.323 "nvme_io_md": false, 00:24:08.323 "write_zeroes": true, 00:24:08.323 "zcopy": true, 00:24:08.323 "get_zone_info": false, 00:24:08.323 "zone_management": false, 00:24:08.323 "zone_append": false, 00:24:08.323 "compare": false, 00:24:08.323 "compare_and_write": false, 00:24:08.323 "abort": true, 00:24:08.323 "seek_hole": false, 00:24:08.323 "seek_data": false, 00:24:08.323 "copy": true, 00:24:08.323 "nvme_iov_md": false 00:24:08.323 }, 00:24:08.323 "memory_domains": [ 00:24:08.323 { 00:24:08.323 "dma_device_id": "system", 00:24:08.323 "dma_device_type": 1 00:24:08.323 }, 00:24:08.323 { 00:24:08.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.323 "dma_device_type": 2 00:24:08.323 } 00:24:08.323 ], 00:24:08.323 "driver_specific": {} 00:24:08.323 } 00:24:08.323 ] 00:24:08.323 16:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:08.323 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:08.323 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:08.323 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:08.582 [2024-07-24 16:41:05.350372] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:08.582 [2024-07-24 16:41:05.350418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:08.582 [2024-07-24 16:41:05.350449] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:08.582 [2024-07-24 16:41:05.352765] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:08.582 [2024-07-24 16:41:05.352826] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:08.582 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:08.582 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:08.582 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:08.582 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:08.582 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:08.582 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:08.582 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:08.582 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:08.582 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:08.582 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 
-- # local tmp 00:24:08.582 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.582 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:08.841 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:08.841 "name": "Existed_Raid", 00:24:08.841 "uuid": "271a73f6-afd6-48d7-be51-d547fe957ec1", 00:24:08.841 "strip_size_kb": 64, 00:24:08.841 "state": "configuring", 00:24:08.841 "raid_level": "concat", 00:24:08.841 "superblock": true, 00:24:08.841 "num_base_bdevs": 4, 00:24:08.841 "num_base_bdevs_discovered": 3, 00:24:08.841 "num_base_bdevs_operational": 4, 00:24:08.841 "base_bdevs_list": [ 00:24:08.841 { 00:24:08.841 "name": "BaseBdev1", 00:24:08.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.841 "is_configured": false, 00:24:08.841 "data_offset": 0, 00:24:08.841 "data_size": 0 00:24:08.841 }, 00:24:08.841 { 00:24:08.841 "name": "BaseBdev2", 00:24:08.841 "uuid": "859bb990-74fd-452c-bb9b-cd7a3416bb53", 00:24:08.841 "is_configured": true, 00:24:08.841 "data_offset": 2048, 00:24:08.841 "data_size": 63488 00:24:08.841 }, 00:24:08.841 { 00:24:08.841 "name": "BaseBdev3", 00:24:08.841 "uuid": "7fae90c7-d72e-431e-ba34-2df483e6e2a4", 00:24:08.841 "is_configured": true, 00:24:08.841 "data_offset": 2048, 00:24:08.841 "data_size": 63488 00:24:08.841 }, 00:24:08.841 { 00:24:08.841 "name": "BaseBdev4", 00:24:08.841 "uuid": "33627187-2071-41f6-85c7-8d71f1f1c0c5", 00:24:08.841 "is_configured": true, 00:24:08.841 "data_offset": 2048, 00:24:08.841 "data_size": 63488 00:24:08.841 } 00:24:08.841 ] 00:24:08.841 }' 00:24:08.841 16:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:08.841 16:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:24:09.409 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:09.668 [2024-07-24 16:41:06.328953] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:09.668 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:09.668 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:09.668 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:09.668 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:09.668 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:09.668 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:09.668 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:09.668 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:09.668 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:09.668 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:09.668 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.668 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:09.927 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.927 "name": 
"Existed_Raid", 00:24:09.927 "uuid": "271a73f6-afd6-48d7-be51-d547fe957ec1", 00:24:09.927 "strip_size_kb": 64, 00:24:09.927 "state": "configuring", 00:24:09.927 "raid_level": "concat", 00:24:09.927 "superblock": true, 00:24:09.927 "num_base_bdevs": 4, 00:24:09.927 "num_base_bdevs_discovered": 2, 00:24:09.927 "num_base_bdevs_operational": 4, 00:24:09.927 "base_bdevs_list": [ 00:24:09.927 { 00:24:09.927 "name": "BaseBdev1", 00:24:09.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.927 "is_configured": false, 00:24:09.927 "data_offset": 0, 00:24:09.927 "data_size": 0 00:24:09.927 }, 00:24:09.927 { 00:24:09.927 "name": null, 00:24:09.927 "uuid": "859bb990-74fd-452c-bb9b-cd7a3416bb53", 00:24:09.927 "is_configured": false, 00:24:09.927 "data_offset": 2048, 00:24:09.927 "data_size": 63488 00:24:09.927 }, 00:24:09.927 { 00:24:09.927 "name": "BaseBdev3", 00:24:09.927 "uuid": "7fae90c7-d72e-431e-ba34-2df483e6e2a4", 00:24:09.927 "is_configured": true, 00:24:09.927 "data_offset": 2048, 00:24:09.927 "data_size": 63488 00:24:09.927 }, 00:24:09.927 { 00:24:09.927 "name": "BaseBdev4", 00:24:09.927 "uuid": "33627187-2071-41f6-85c7-8d71f1f1c0c5", 00:24:09.927 "is_configured": true, 00:24:09.927 "data_offset": 2048, 00:24:09.927 "data_size": 63488 00:24:09.927 } 00:24:09.927 ] 00:24:09.927 }' 00:24:09.927 16:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.927 16:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.496 16:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.496 16:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:10.496 16:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:10.496 16:41:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:10.755 [2024-07-24 16:41:07.586538] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:10.755 BaseBdev1 00:24:10.755 16:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:10.755 16:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:10.755 16:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:10.755 16:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:10.755 16:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:10.755 16:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:10.755 16:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:11.014 16:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:11.273 [ 00:24:11.273 { 00:24:11.273 "name": "BaseBdev1", 00:24:11.273 "aliases": [ 00:24:11.273 "9097445f-9163-49f1-839d-b9f8baba190d" 00:24:11.273 ], 00:24:11.273 "product_name": "Malloc disk", 00:24:11.273 "block_size": 512, 00:24:11.273 "num_blocks": 65536, 00:24:11.273 "uuid": "9097445f-9163-49f1-839d-b9f8baba190d", 00:24:11.273 "assigned_rate_limits": { 00:24:11.273 "rw_ios_per_sec": 0, 00:24:11.273 "rw_mbytes_per_sec": 0, 00:24:11.273 "r_mbytes_per_sec": 0, 00:24:11.273 "w_mbytes_per_sec": 0 00:24:11.273 }, 
00:24:11.273 "claimed": true, 00:24:11.273 "claim_type": "exclusive_write", 00:24:11.273 "zoned": false, 00:24:11.273 "supported_io_types": { 00:24:11.273 "read": true, 00:24:11.273 "write": true, 00:24:11.273 "unmap": true, 00:24:11.273 "flush": true, 00:24:11.273 "reset": true, 00:24:11.273 "nvme_admin": false, 00:24:11.273 "nvme_io": false, 00:24:11.273 "nvme_io_md": false, 00:24:11.273 "write_zeroes": true, 00:24:11.273 "zcopy": true, 00:24:11.273 "get_zone_info": false, 00:24:11.273 "zone_management": false, 00:24:11.273 "zone_append": false, 00:24:11.273 "compare": false, 00:24:11.273 "compare_and_write": false, 00:24:11.273 "abort": true, 00:24:11.273 "seek_hole": false, 00:24:11.273 "seek_data": false, 00:24:11.273 "copy": true, 00:24:11.273 "nvme_iov_md": false 00:24:11.273 }, 00:24:11.273 "memory_domains": [ 00:24:11.273 { 00:24:11.273 "dma_device_id": "system", 00:24:11.273 "dma_device_type": 1 00:24:11.273 }, 00:24:11.273 { 00:24:11.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.273 "dma_device_type": 2 00:24:11.273 } 00:24:11.273 ], 00:24:11.273 "driver_specific": {} 00:24:11.273 } 00:24:11.273 ] 00:24:11.273 16:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:11.273 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:11.273 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:11.273 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:11.273 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:11.273 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:11.273 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:24:11.273 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:11.273 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:11.273 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:11.273 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:11.273 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.273 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:11.531 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:11.531 "name": "Existed_Raid", 00:24:11.531 "uuid": "271a73f6-afd6-48d7-be51-d547fe957ec1", 00:24:11.531 "strip_size_kb": 64, 00:24:11.531 "state": "configuring", 00:24:11.531 "raid_level": "concat", 00:24:11.531 "superblock": true, 00:24:11.531 "num_base_bdevs": 4, 00:24:11.531 "num_base_bdevs_discovered": 3, 00:24:11.531 "num_base_bdevs_operational": 4, 00:24:11.531 "base_bdevs_list": [ 00:24:11.531 { 00:24:11.531 "name": "BaseBdev1", 00:24:11.531 "uuid": "9097445f-9163-49f1-839d-b9f8baba190d", 00:24:11.531 "is_configured": true, 00:24:11.531 "data_offset": 2048, 00:24:11.531 "data_size": 63488 00:24:11.531 }, 00:24:11.531 { 00:24:11.531 "name": null, 00:24:11.531 "uuid": "859bb990-74fd-452c-bb9b-cd7a3416bb53", 00:24:11.531 "is_configured": false, 00:24:11.531 "data_offset": 2048, 00:24:11.531 "data_size": 63488 00:24:11.531 }, 00:24:11.531 { 00:24:11.531 "name": "BaseBdev3", 00:24:11.531 "uuid": "7fae90c7-d72e-431e-ba34-2df483e6e2a4", 00:24:11.531 "is_configured": true, 00:24:11.531 "data_offset": 2048, 00:24:11.531 "data_size": 63488 00:24:11.531 }, 00:24:11.531 { 00:24:11.531 
"name": "BaseBdev4", 00:24:11.531 "uuid": "33627187-2071-41f6-85c7-8d71f1f1c0c5", 00:24:11.531 "is_configured": true, 00:24:11.531 "data_offset": 2048, 00:24:11.531 "data_size": 63488 00:24:11.531 } 00:24:11.531 ] 00:24:11.531 }' 00:24:11.531 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:11.531 16:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.098 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.098 16:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:12.665 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:12.665 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:12.924 [2024-07-24 16:41:09.564062] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:12.924 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:12.924 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:12.924 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:12.924 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:12.924 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:12.924 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:12.924 16:41:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:12.924 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:12.924 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:12.924 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:12.924 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.924 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:12.924 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:12.924 "name": "Existed_Raid", 00:24:12.924 "uuid": "271a73f6-afd6-48d7-be51-d547fe957ec1", 00:24:12.924 "strip_size_kb": 64, 00:24:12.924 "state": "configuring", 00:24:12.924 "raid_level": "concat", 00:24:12.924 "superblock": true, 00:24:12.924 "num_base_bdevs": 4, 00:24:12.924 "num_base_bdevs_discovered": 2, 00:24:12.924 "num_base_bdevs_operational": 4, 00:24:12.924 "base_bdevs_list": [ 00:24:12.924 { 00:24:12.924 "name": "BaseBdev1", 00:24:12.924 "uuid": "9097445f-9163-49f1-839d-b9f8baba190d", 00:24:12.924 "is_configured": true, 00:24:12.924 "data_offset": 2048, 00:24:12.924 "data_size": 63488 00:24:12.924 }, 00:24:12.924 { 00:24:12.924 "name": null, 00:24:12.924 "uuid": "859bb990-74fd-452c-bb9b-cd7a3416bb53", 00:24:12.924 "is_configured": false, 00:24:12.924 "data_offset": 2048, 00:24:12.924 "data_size": 63488 00:24:12.924 }, 00:24:12.924 { 00:24:12.924 "name": null, 00:24:12.924 "uuid": "7fae90c7-d72e-431e-ba34-2df483e6e2a4", 00:24:12.924 "is_configured": false, 00:24:12.924 "data_offset": 2048, 00:24:12.924 "data_size": 63488 00:24:12.924 }, 00:24:12.924 { 00:24:12.924 "name": "BaseBdev4", 
00:24:12.924 "uuid": "33627187-2071-41f6-85c7-8d71f1f1c0c5", 00:24:12.924 "is_configured": true, 00:24:12.924 "data_offset": 2048, 00:24:12.924 "data_size": 63488 00:24:12.924 } 00:24:12.924 ] 00:24:12.924 }' 00:24:12.924 16:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:12.924 16:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.860 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.860 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:14.119 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:14.119 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:14.119 [2024-07-24 16:41:10.959841] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:14.119 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:14.119 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:14.119 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:14.119 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:14.119 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:14.119 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:14.119 16:41:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:14.119 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:14.119 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:14.119 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:14.119 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.119 16:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.378 16:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:14.378 "name": "Existed_Raid", 00:24:14.378 "uuid": "271a73f6-afd6-48d7-be51-d547fe957ec1", 00:24:14.378 "strip_size_kb": 64, 00:24:14.378 "state": "configuring", 00:24:14.378 "raid_level": "concat", 00:24:14.378 "superblock": true, 00:24:14.378 "num_base_bdevs": 4, 00:24:14.378 "num_base_bdevs_discovered": 3, 00:24:14.378 "num_base_bdevs_operational": 4, 00:24:14.378 "base_bdevs_list": [ 00:24:14.378 { 00:24:14.378 "name": "BaseBdev1", 00:24:14.378 "uuid": "9097445f-9163-49f1-839d-b9f8baba190d", 00:24:14.378 "is_configured": true, 00:24:14.378 "data_offset": 2048, 00:24:14.378 "data_size": 63488 00:24:14.378 }, 00:24:14.378 { 00:24:14.378 "name": null, 00:24:14.378 "uuid": "859bb990-74fd-452c-bb9b-cd7a3416bb53", 00:24:14.378 "is_configured": false, 00:24:14.378 "data_offset": 2048, 00:24:14.378 "data_size": 63488 00:24:14.378 }, 00:24:14.378 { 00:24:14.378 "name": "BaseBdev3", 00:24:14.378 "uuid": "7fae90c7-d72e-431e-ba34-2df483e6e2a4", 00:24:14.378 "is_configured": true, 00:24:14.378 "data_offset": 2048, 00:24:14.378 "data_size": 63488 00:24:14.378 }, 00:24:14.378 { 00:24:14.378 "name": "BaseBdev4", 
00:24:14.378 "uuid": "33627187-2071-41f6-85c7-8d71f1f1c0c5", 00:24:14.378 "is_configured": true, 00:24:14.378 "data_offset": 2048, 00:24:14.378 "data_size": 63488 00:24:14.378 } 00:24:14.378 ] 00:24:14.378 }' 00:24:14.378 16:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:14.378 16:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:14.946 16:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.946 16:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:15.204 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:15.204 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:15.462 [2024-07-24 16:41:12.203264] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:15.720 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:15.720 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:15.720 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:15.720 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:15.720 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:15.720 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:15.720 16:41:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:15.720 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:15.720 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:15.720 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:15.720 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.720 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:15.720 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:15.720 "name": "Existed_Raid", 00:24:15.720 "uuid": "271a73f6-afd6-48d7-be51-d547fe957ec1", 00:24:15.720 "strip_size_kb": 64, 00:24:15.720 "state": "configuring", 00:24:15.720 "raid_level": "concat", 00:24:15.720 "superblock": true, 00:24:15.720 "num_base_bdevs": 4, 00:24:15.720 "num_base_bdevs_discovered": 2, 00:24:15.720 "num_base_bdevs_operational": 4, 00:24:15.720 "base_bdevs_list": [ 00:24:15.720 { 00:24:15.720 "name": null, 00:24:15.720 "uuid": "9097445f-9163-49f1-839d-b9f8baba190d", 00:24:15.720 "is_configured": false, 00:24:15.720 "data_offset": 2048, 00:24:15.720 "data_size": 63488 00:24:15.721 }, 00:24:15.721 { 00:24:15.721 "name": null, 00:24:15.721 "uuid": "859bb990-74fd-452c-bb9b-cd7a3416bb53", 00:24:15.721 "is_configured": false, 00:24:15.721 "data_offset": 2048, 00:24:15.721 "data_size": 63488 00:24:15.721 }, 00:24:15.721 { 00:24:15.721 "name": "BaseBdev3", 00:24:15.721 "uuid": "7fae90c7-d72e-431e-ba34-2df483e6e2a4", 00:24:15.721 "is_configured": true, 00:24:15.721 "data_offset": 2048, 00:24:15.721 "data_size": 63488 00:24:15.721 }, 00:24:15.721 { 00:24:15.721 "name": "BaseBdev4", 00:24:15.721 "uuid": 
"33627187-2071-41f6-85c7-8d71f1f1c0c5", 00:24:15.721 "is_configured": true, 00:24:15.721 "data_offset": 2048, 00:24:15.721 "data_size": 63488 00:24:15.721 } 00:24:15.721 ] 00:24:15.721 }' 00:24:15.721 16:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:15.721 16:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.287 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.287 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:16.546 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:16.546 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:16.805 [2024-07-24 16:41:13.530567] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:16.805 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:16.805 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:16.805 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:16.805 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:16.805 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:16.805 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:16.805 16:41:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:16.805 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:16.805 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:16.805 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:16.805 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.805 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:17.102 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:17.102 "name": "Existed_Raid", 00:24:17.102 "uuid": "271a73f6-afd6-48d7-be51-d547fe957ec1", 00:24:17.102 "strip_size_kb": 64, 00:24:17.102 "state": "configuring", 00:24:17.102 "raid_level": "concat", 00:24:17.102 "superblock": true, 00:24:17.102 "num_base_bdevs": 4, 00:24:17.102 "num_base_bdevs_discovered": 3, 00:24:17.102 "num_base_bdevs_operational": 4, 00:24:17.102 "base_bdevs_list": [ 00:24:17.102 { 00:24:17.102 "name": null, 00:24:17.102 "uuid": "9097445f-9163-49f1-839d-b9f8baba190d", 00:24:17.102 "is_configured": false, 00:24:17.102 "data_offset": 2048, 00:24:17.102 "data_size": 63488 00:24:17.102 }, 00:24:17.102 { 00:24:17.102 "name": "BaseBdev2", 00:24:17.102 "uuid": "859bb990-74fd-452c-bb9b-cd7a3416bb53", 00:24:17.102 "is_configured": true, 00:24:17.102 "data_offset": 2048, 00:24:17.102 "data_size": 63488 00:24:17.102 }, 00:24:17.102 { 00:24:17.102 "name": "BaseBdev3", 00:24:17.102 "uuid": "7fae90c7-d72e-431e-ba34-2df483e6e2a4", 00:24:17.102 "is_configured": true, 00:24:17.102 "data_offset": 2048, 00:24:17.102 "data_size": 63488 00:24:17.102 }, 00:24:17.102 { 00:24:17.102 "name": "BaseBdev4", 
00:24:17.102 "uuid": "33627187-2071-41f6-85c7-8d71f1f1c0c5", 00:24:17.102 "is_configured": true, 00:24:17.102 "data_offset": 2048, 00:24:17.102 "data_size": 63488 00:24:17.102 } 00:24:17.102 ] 00:24:17.102 }' 00:24:17.102 16:41:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:17.102 16:41:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:17.698 16:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.698 16:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:17.698 16:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:17.698 16:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.698 16:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:17.956 16:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9097445f-9163-49f1-839d-b9f8baba190d 00:24:18.215 [2024-07-24 16:41:14.881119] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:18.215 [2024-07-24 16:41:14.881387] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:24:18.215 [2024-07-24 16:41:14.881406] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:18.215 [2024-07-24 16:41:14.881728] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:24:18.215 NewBaseBdev 
00:24:18.215 [2024-07-24 16:41:14.881944] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:24:18.215 [2024-07-24 16:41:14.881961] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:24:18.215 [2024-07-24 16:41:14.882128] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:18.215 16:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:18.215 16:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:24:18.215 16:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:18.215 16:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:18.215 16:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:18.215 16:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:18.215 16:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:18.215 16:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:18.784 [ 00:24:18.784 { 00:24:18.784 "name": "NewBaseBdev", 00:24:18.784 "aliases": [ 00:24:18.784 "9097445f-9163-49f1-839d-b9f8baba190d" 00:24:18.784 ], 00:24:18.784 "product_name": "Malloc disk", 00:24:18.784 "block_size": 512, 00:24:18.784 "num_blocks": 65536, 00:24:18.784 "uuid": "9097445f-9163-49f1-839d-b9f8baba190d", 00:24:18.784 "assigned_rate_limits": { 00:24:18.784 "rw_ios_per_sec": 0, 00:24:18.784 "rw_mbytes_per_sec": 0, 00:24:18.784 
"r_mbytes_per_sec": 0, 00:24:18.784 "w_mbytes_per_sec": 0 00:24:18.784 }, 00:24:18.784 "claimed": true, 00:24:18.784 "claim_type": "exclusive_write", 00:24:18.784 "zoned": false, 00:24:18.784 "supported_io_types": { 00:24:18.784 "read": true, 00:24:18.784 "write": true, 00:24:18.784 "unmap": true, 00:24:18.784 "flush": true, 00:24:18.784 "reset": true, 00:24:18.784 "nvme_admin": false, 00:24:18.784 "nvme_io": false, 00:24:18.784 "nvme_io_md": false, 00:24:18.784 "write_zeroes": true, 00:24:18.784 "zcopy": true, 00:24:18.784 "get_zone_info": false, 00:24:18.784 "zone_management": false, 00:24:18.784 "zone_append": false, 00:24:18.784 "compare": false, 00:24:18.784 "compare_and_write": false, 00:24:18.784 "abort": true, 00:24:18.784 "seek_hole": false, 00:24:18.784 "seek_data": false, 00:24:18.784 "copy": true, 00:24:18.784 "nvme_iov_md": false 00:24:18.784 }, 00:24:18.784 "memory_domains": [ 00:24:18.784 { 00:24:18.784 "dma_device_id": "system", 00:24:18.784 "dma_device_type": 1 00:24:18.784 }, 00:24:18.784 { 00:24:18.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:18.784 "dma_device_type": 2 00:24:18.784 } 00:24:18.784 ], 00:24:18.784 "driver_specific": {} 00:24:18.784 } 00:24:18.784 ] 00:24:18.784 16:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:18.784 16:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:18.784 16:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:18.784 16:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:18.784 16:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:18.784 16:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:18.784 16:41:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:18.784 16:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:18.784 16:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:18.784 16:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:18.784 16:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:18.784 16:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.784 16:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:19.044 16:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:19.044 "name": "Existed_Raid", 00:24:19.044 "uuid": "271a73f6-afd6-48d7-be51-d547fe957ec1", 00:24:19.044 "strip_size_kb": 64, 00:24:19.044 "state": "online", 00:24:19.044 "raid_level": "concat", 00:24:19.044 "superblock": true, 00:24:19.044 "num_base_bdevs": 4, 00:24:19.044 "num_base_bdevs_discovered": 4, 00:24:19.044 "num_base_bdevs_operational": 4, 00:24:19.044 "base_bdevs_list": [ 00:24:19.044 { 00:24:19.044 "name": "NewBaseBdev", 00:24:19.044 "uuid": "9097445f-9163-49f1-839d-b9f8baba190d", 00:24:19.044 "is_configured": true, 00:24:19.044 "data_offset": 2048, 00:24:19.044 "data_size": 63488 00:24:19.044 }, 00:24:19.044 { 00:24:19.044 "name": "BaseBdev2", 00:24:19.044 "uuid": "859bb990-74fd-452c-bb9b-cd7a3416bb53", 00:24:19.044 "is_configured": true, 00:24:19.044 "data_offset": 2048, 00:24:19.044 "data_size": 63488 00:24:19.044 }, 00:24:19.044 { 00:24:19.044 "name": "BaseBdev3", 00:24:19.044 "uuid": "7fae90c7-d72e-431e-ba34-2df483e6e2a4", 00:24:19.044 "is_configured": true, 00:24:19.044 "data_offset": 2048, 00:24:19.044 
"data_size": 63488 00:24:19.044 }, 00:24:19.044 { 00:24:19.044 "name": "BaseBdev4", 00:24:19.044 "uuid": "33627187-2071-41f6-85c7-8d71f1f1c0c5", 00:24:19.044 "is_configured": true, 00:24:19.044 "data_offset": 2048, 00:24:19.044 "data_size": 63488 00:24:19.044 } 00:24:19.044 ] 00:24:19.044 }' 00:24:19.044 16:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:19.044 16:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:19.612 16:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:19.612 16:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:19.612 16:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:19.612 16:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:19.612 16:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:19.612 16:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:19.612 16:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:19.612 16:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:20.180 [2024-07-24 16:41:16.806812] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:20.180 16:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:20.180 "name": "Existed_Raid", 00:24:20.180 "aliases": [ 00:24:20.180 "271a73f6-afd6-48d7-be51-d547fe957ec1" 00:24:20.180 ], 00:24:20.180 "product_name": "Raid Volume", 00:24:20.180 "block_size": 512, 00:24:20.180 "num_blocks": 253952, 00:24:20.180 "uuid": 
"271a73f6-afd6-48d7-be51-d547fe957ec1", 00:24:20.180 "assigned_rate_limits": { 00:24:20.180 "rw_ios_per_sec": 0, 00:24:20.180 "rw_mbytes_per_sec": 0, 00:24:20.180 "r_mbytes_per_sec": 0, 00:24:20.180 "w_mbytes_per_sec": 0 00:24:20.180 }, 00:24:20.180 "claimed": false, 00:24:20.180 "zoned": false, 00:24:20.180 "supported_io_types": { 00:24:20.180 "read": true, 00:24:20.180 "write": true, 00:24:20.180 "unmap": true, 00:24:20.180 "flush": true, 00:24:20.180 "reset": true, 00:24:20.180 "nvme_admin": false, 00:24:20.180 "nvme_io": false, 00:24:20.180 "nvme_io_md": false, 00:24:20.180 "write_zeroes": true, 00:24:20.180 "zcopy": false, 00:24:20.180 "get_zone_info": false, 00:24:20.180 "zone_management": false, 00:24:20.180 "zone_append": false, 00:24:20.180 "compare": false, 00:24:20.180 "compare_and_write": false, 00:24:20.180 "abort": false, 00:24:20.180 "seek_hole": false, 00:24:20.180 "seek_data": false, 00:24:20.180 "copy": false, 00:24:20.180 "nvme_iov_md": false 00:24:20.180 }, 00:24:20.180 "memory_domains": [ 00:24:20.180 { 00:24:20.180 "dma_device_id": "system", 00:24:20.180 "dma_device_type": 1 00:24:20.180 }, 00:24:20.180 { 00:24:20.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.180 "dma_device_type": 2 00:24:20.180 }, 00:24:20.180 { 00:24:20.180 "dma_device_id": "system", 00:24:20.180 "dma_device_type": 1 00:24:20.180 }, 00:24:20.180 { 00:24:20.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.180 "dma_device_type": 2 00:24:20.180 }, 00:24:20.180 { 00:24:20.180 "dma_device_id": "system", 00:24:20.180 "dma_device_type": 1 00:24:20.180 }, 00:24:20.180 { 00:24:20.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.181 "dma_device_type": 2 00:24:20.181 }, 00:24:20.181 { 00:24:20.181 "dma_device_id": "system", 00:24:20.181 "dma_device_type": 1 00:24:20.181 }, 00:24:20.181 { 00:24:20.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.181 "dma_device_type": 2 00:24:20.181 } 00:24:20.181 ], 00:24:20.181 "driver_specific": { 00:24:20.181 "raid": { 
00:24:20.181 "uuid": "271a73f6-afd6-48d7-be51-d547fe957ec1", 00:24:20.181 "strip_size_kb": 64, 00:24:20.181 "state": "online", 00:24:20.181 "raid_level": "concat", 00:24:20.181 "superblock": true, 00:24:20.181 "num_base_bdevs": 4, 00:24:20.181 "num_base_bdevs_discovered": 4, 00:24:20.181 "num_base_bdevs_operational": 4, 00:24:20.181 "base_bdevs_list": [ 00:24:20.181 { 00:24:20.181 "name": "NewBaseBdev", 00:24:20.181 "uuid": "9097445f-9163-49f1-839d-b9f8baba190d", 00:24:20.181 "is_configured": true, 00:24:20.181 "data_offset": 2048, 00:24:20.181 "data_size": 63488 00:24:20.181 }, 00:24:20.181 { 00:24:20.181 "name": "BaseBdev2", 00:24:20.181 "uuid": "859bb990-74fd-452c-bb9b-cd7a3416bb53", 00:24:20.181 "is_configured": true, 00:24:20.181 "data_offset": 2048, 00:24:20.181 "data_size": 63488 00:24:20.181 }, 00:24:20.181 { 00:24:20.181 "name": "BaseBdev3", 00:24:20.181 "uuid": "7fae90c7-d72e-431e-ba34-2df483e6e2a4", 00:24:20.181 "is_configured": true, 00:24:20.181 "data_offset": 2048, 00:24:20.181 "data_size": 63488 00:24:20.181 }, 00:24:20.181 { 00:24:20.181 "name": "BaseBdev4", 00:24:20.181 "uuid": "33627187-2071-41f6-85c7-8d71f1f1c0c5", 00:24:20.181 "is_configured": true, 00:24:20.181 "data_offset": 2048, 00:24:20.181 "data_size": 63488 00:24:20.181 } 00:24:20.181 ] 00:24:20.181 } 00:24:20.181 } 00:24:20.181 }' 00:24:20.181 16:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:20.181 16:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:20.181 BaseBdev2 00:24:20.181 BaseBdev3 00:24:20.181 BaseBdev4' 00:24:20.181 16:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:20.181 16:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b NewBaseBdev 00:24:20.181 16:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:20.440 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:20.440 "name": "NewBaseBdev", 00:24:20.440 "aliases": [ 00:24:20.440 "9097445f-9163-49f1-839d-b9f8baba190d" 00:24:20.440 ], 00:24:20.440 "product_name": "Malloc disk", 00:24:20.440 "block_size": 512, 00:24:20.440 "num_blocks": 65536, 00:24:20.440 "uuid": "9097445f-9163-49f1-839d-b9f8baba190d", 00:24:20.440 "assigned_rate_limits": { 00:24:20.440 "rw_ios_per_sec": 0, 00:24:20.440 "rw_mbytes_per_sec": 0, 00:24:20.440 "r_mbytes_per_sec": 0, 00:24:20.440 "w_mbytes_per_sec": 0 00:24:20.440 }, 00:24:20.440 "claimed": true, 00:24:20.440 "claim_type": "exclusive_write", 00:24:20.440 "zoned": false, 00:24:20.440 "supported_io_types": { 00:24:20.440 "read": true, 00:24:20.440 "write": true, 00:24:20.440 "unmap": true, 00:24:20.440 "flush": true, 00:24:20.440 "reset": true, 00:24:20.440 "nvme_admin": false, 00:24:20.440 "nvme_io": false, 00:24:20.440 "nvme_io_md": false, 00:24:20.440 "write_zeroes": true, 00:24:20.440 "zcopy": true, 00:24:20.440 "get_zone_info": false, 00:24:20.440 "zone_management": false, 00:24:20.440 "zone_append": false, 00:24:20.440 "compare": false, 00:24:20.440 "compare_and_write": false, 00:24:20.440 "abort": true, 00:24:20.440 "seek_hole": false, 00:24:20.440 "seek_data": false, 00:24:20.440 "copy": true, 00:24:20.440 "nvme_iov_md": false 00:24:20.440 }, 00:24:20.440 "memory_domains": [ 00:24:20.440 { 00:24:20.440 "dma_device_id": "system", 00:24:20.440 "dma_device_type": 1 00:24:20.440 }, 00:24:20.440 { 00:24:20.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.440 "dma_device_type": 2 00:24:20.440 } 00:24:20.440 ], 00:24:20.440 "driver_specific": {} 00:24:20.440 }' 00:24:20.440 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:20.440 16:41:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:20.440 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:20.440 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:20.440 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:20.699 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:20.699 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:20.699 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:20.699 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:20.699 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:20.699 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:20.699 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:20.699 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:20.699 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:20.699 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:20.959 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:20.959 "name": "BaseBdev2", 00:24:20.959 "aliases": [ 00:24:20.959 "859bb990-74fd-452c-bb9b-cd7a3416bb53" 00:24:20.959 ], 00:24:20.959 "product_name": "Malloc disk", 00:24:20.959 "block_size": 512, 00:24:20.959 "num_blocks": 65536, 00:24:20.959 "uuid": "859bb990-74fd-452c-bb9b-cd7a3416bb53", 00:24:20.959 
"assigned_rate_limits": { 00:24:20.959 "rw_ios_per_sec": 0, 00:24:20.959 "rw_mbytes_per_sec": 0, 00:24:20.959 "r_mbytes_per_sec": 0, 00:24:20.959 "w_mbytes_per_sec": 0 00:24:20.959 }, 00:24:20.959 "claimed": true, 00:24:20.959 "claim_type": "exclusive_write", 00:24:20.959 "zoned": false, 00:24:20.959 "supported_io_types": { 00:24:20.959 "read": true, 00:24:20.959 "write": true, 00:24:20.959 "unmap": true, 00:24:20.959 "flush": true, 00:24:20.959 "reset": true, 00:24:20.959 "nvme_admin": false, 00:24:20.959 "nvme_io": false, 00:24:20.959 "nvme_io_md": false, 00:24:20.959 "write_zeroes": true, 00:24:20.959 "zcopy": true, 00:24:20.959 "get_zone_info": false, 00:24:20.959 "zone_management": false, 00:24:20.959 "zone_append": false, 00:24:20.959 "compare": false, 00:24:20.959 "compare_and_write": false, 00:24:20.959 "abort": true, 00:24:20.959 "seek_hole": false, 00:24:20.959 "seek_data": false, 00:24:20.959 "copy": true, 00:24:20.959 "nvme_iov_md": false 00:24:20.959 }, 00:24:20.959 "memory_domains": [ 00:24:20.959 { 00:24:20.959 "dma_device_id": "system", 00:24:20.959 "dma_device_type": 1 00:24:20.959 }, 00:24:20.959 { 00:24:20.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.959 "dma_device_type": 2 00:24:20.959 } 00:24:20.959 ], 00:24:20.959 "driver_specific": {} 00:24:20.959 }' 00:24:20.959 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:20.959 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:21.218 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:21.218 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:21.218 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:21.218 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:21.218 16:41:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:21.218 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:21.218 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:21.218 16:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:21.218 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:21.477 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:21.477 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:21.477 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:21.477 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:21.477 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:21.477 "name": "BaseBdev3", 00:24:21.477 "aliases": [ 00:24:21.477 "7fae90c7-d72e-431e-ba34-2df483e6e2a4" 00:24:21.477 ], 00:24:21.477 "product_name": "Malloc disk", 00:24:21.477 "block_size": 512, 00:24:21.477 "num_blocks": 65536, 00:24:21.477 "uuid": "7fae90c7-d72e-431e-ba34-2df483e6e2a4", 00:24:21.477 "assigned_rate_limits": { 00:24:21.477 "rw_ios_per_sec": 0, 00:24:21.477 "rw_mbytes_per_sec": 0, 00:24:21.477 "r_mbytes_per_sec": 0, 00:24:21.477 "w_mbytes_per_sec": 0 00:24:21.477 }, 00:24:21.477 "claimed": true, 00:24:21.477 "claim_type": "exclusive_write", 00:24:21.477 "zoned": false, 00:24:21.477 "supported_io_types": { 00:24:21.477 "read": true, 00:24:21.477 "write": true, 00:24:21.477 "unmap": true, 00:24:21.477 "flush": true, 00:24:21.477 "reset": true, 00:24:21.477 "nvme_admin": false, 00:24:21.477 "nvme_io": false, 00:24:21.477 "nvme_io_md": false, 00:24:21.477 
"write_zeroes": true, 00:24:21.477 "zcopy": true, 00:24:21.477 "get_zone_info": false, 00:24:21.477 "zone_management": false, 00:24:21.477 "zone_append": false, 00:24:21.477 "compare": false, 00:24:21.477 "compare_and_write": false, 00:24:21.477 "abort": true, 00:24:21.477 "seek_hole": false, 00:24:21.477 "seek_data": false, 00:24:21.477 "copy": true, 00:24:21.477 "nvme_iov_md": false 00:24:21.477 }, 00:24:21.477 "memory_domains": [ 00:24:21.477 { 00:24:21.477 "dma_device_id": "system", 00:24:21.477 "dma_device_type": 1 00:24:21.477 }, 00:24:21.477 { 00:24:21.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.477 "dma_device_type": 2 00:24:21.477 } 00:24:21.477 ], 00:24:21.477 "driver_specific": {} 00:24:21.477 }' 00:24:21.477 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:21.477 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:21.477 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:21.477 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:21.736 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:21.736 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:21.736 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:21.736 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:21.736 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:21.737 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:21.737 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:21.737 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:24:21.737 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:21.737 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:21.737 16:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:22.304 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:22.304 "name": "BaseBdev4", 00:24:22.304 "aliases": [ 00:24:22.304 "33627187-2071-41f6-85c7-8d71f1f1c0c5" 00:24:22.304 ], 00:24:22.304 "product_name": "Malloc disk", 00:24:22.304 "block_size": 512, 00:24:22.304 "num_blocks": 65536, 00:24:22.304 "uuid": "33627187-2071-41f6-85c7-8d71f1f1c0c5", 00:24:22.304 "assigned_rate_limits": { 00:24:22.304 "rw_ios_per_sec": 0, 00:24:22.304 "rw_mbytes_per_sec": 0, 00:24:22.304 "r_mbytes_per_sec": 0, 00:24:22.304 "w_mbytes_per_sec": 0 00:24:22.304 }, 00:24:22.304 "claimed": true, 00:24:22.304 "claim_type": "exclusive_write", 00:24:22.304 "zoned": false, 00:24:22.304 "supported_io_types": { 00:24:22.304 "read": true, 00:24:22.304 "write": true, 00:24:22.304 "unmap": true, 00:24:22.304 "flush": true, 00:24:22.304 "reset": true, 00:24:22.304 "nvme_admin": false, 00:24:22.304 "nvme_io": false, 00:24:22.304 "nvme_io_md": false, 00:24:22.304 "write_zeroes": true, 00:24:22.304 "zcopy": true, 00:24:22.304 "get_zone_info": false, 00:24:22.304 "zone_management": false, 00:24:22.304 "zone_append": false, 00:24:22.304 "compare": false, 00:24:22.304 "compare_and_write": false, 00:24:22.304 "abort": true, 00:24:22.304 "seek_hole": false, 00:24:22.304 "seek_data": false, 00:24:22.304 "copy": true, 00:24:22.304 "nvme_iov_md": false 00:24:22.304 }, 00:24:22.304 "memory_domains": [ 00:24:22.304 { 00:24:22.304 "dma_device_id": "system", 00:24:22.304 "dma_device_type": 1 00:24:22.304 }, 00:24:22.304 { 00:24:22.304 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.304 "dma_device_type": 2 00:24:22.304 } 00:24:22.304 ], 00:24:22.304 "driver_specific": {} 00:24:22.304 }' 00:24:22.305 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:22.305 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:22.305 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:22.305 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:22.305 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:22.305 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:22.305 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:22.563 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:22.563 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:22.563 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:22.563 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:22.563 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:22.563 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:22.823 [2024-07-24 16:41:19.537805] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:22.823 [2024-07-24 16:41:19.537838] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:22.823 [2024-07-24 16:41:19.537920] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:24:22.823 [2024-07-24 16:41:19.538008] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:22.823 [2024-07-24 16:41:19.538026] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:24:22.823 16:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1704935 00:24:22.823 16:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 1704935 ']' 00:24:22.823 16:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 1704935 00:24:22.823 16:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:24:22.823 16:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:22.823 16:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1704935 00:24:22.823 16:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:22.823 16:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:22.823 16:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1704935' 00:24:22.823 killing process with pid 1704935 00:24:22.823 16:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 1704935 00:24:22.823 [2024-07-24 16:41:19.616012] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:22.823 16:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 1704935 00:24:23.391 [2024-07-24 16:41:20.080822] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:25.298 16:41:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 
00:24:25.298 00:24:25.298 real 0m34.362s 00:24:25.298 user 1m0.326s 00:24:25.298 sys 0m5.596s 00:24:25.298 16:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:25.298 16:41:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.298 ************************************ 00:24:25.298 END TEST raid_state_function_test_sb 00:24:25.298 ************************************ 00:24:25.298 16:41:21 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:24:25.298 16:41:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:24:25.298 16:41:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:25.298 16:41:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:25.298 ************************************ 00:24:25.298 START TEST raid_superblock_test 00:24:25.298 ************************************ 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=1711382 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 1711382 /var/tmp/spdk-raid.sock 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 1711382 ']' 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:25.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:25.298 16:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:25.298 [2024-07-24 16:41:21.993585] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:24:25.298 [2024-07-24 16:41:21.993699] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1711382 ] 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.298 EAL: Requested device 0000:3f:01.7 
cannot be used 00:24:25.298 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.299 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:25.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.299 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:25.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.299 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:25.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.299 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:25.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.299 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:25.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.299 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:25.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.299 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:25.299 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:25.299 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:25.558 [2024-07-24 16:41:22.218882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.818 [2024-07-24 16:41:22.483146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.077 [2024-07-24 16:41:22.809084] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:26.077 [2024-07-24 16:41:22.809126] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:26.336 16:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.336 16:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:24:26.336 16:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:24:26.336 16:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 
00:24:26.336 16:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:24:26.336 16:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:24:26.336 16:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:26.336 16:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:26.336 16:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:26.336 16:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:26.336 16:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:26.595 malloc1 00:24:26.595 16:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:26.854 [2024-07-24 16:41:23.487051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:26.854 [2024-07-24 16:41:23.487114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:26.854 [2024-07-24 16:41:23.487153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:24:26.854 [2024-07-24 16:41:23.487170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:26.854 [2024-07-24 16:41:23.489901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:26.854 [2024-07-24 16:41:23.489939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:26.854 pt1 00:24:26.854 16:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- 
# (( i++ )) 00:24:26.854 16:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:26.854 16:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:24:26.854 16:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:24:26.854 16:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:26.854 16:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:26.854 16:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:26.854 16:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:26.854 16:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:27.113 malloc2 00:24:27.113 16:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:27.372 [2024-07-24 16:41:23.996233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:27.372 [2024-07-24 16:41:23.996293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:27.372 [2024-07-24 16:41:23.996320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:24:27.372 [2024-07-24 16:41:23.996336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:27.372 [2024-07-24 16:41:23.999045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:27.372 [2024-07-24 16:41:23.999085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:24:27.372 pt2 00:24:27.372 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:27.372 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:27.372 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:24:27.372 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:24:27.372 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:27.372 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:27.372 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:27.372 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:27.372 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:27.631 malloc3 00:24:27.631 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:27.631 [2024-07-24 16:41:24.479565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:27.631 [2024-07-24 16:41:24.479618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:27.631 [2024-07-24 16:41:24.479646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:24:27.631 [2024-07-24 16:41:24.479662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:27.631 [2024-07-24 16:41:24.482325] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:24:27.631 [2024-07-24 16:41:24.482357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:27.631 pt3 00:24:27.890 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:27.890 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:27.890 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:24:27.890 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:24:27.890 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:27.890 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:27.890 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:27.890 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:27.890 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:28.149 malloc4 00:24:28.149 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:28.149 [2024-07-24 16:41:24.979767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:28.149 [2024-07-24 16:41:24.979835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:28.149 [2024-07-24 16:41:24.979863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:24:28.149 [2024-07-24 16:41:24.979879] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:28.149 [2024-07-24 16:41:24.982632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:28.149 [2024-07-24 16:41:24.982666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:28.149 pt4 00:24:28.149 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:28.149 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:28.149 16:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:28.408 [2024-07-24 16:41:25.200434] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:28.408 [2024-07-24 16:41:25.202771] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:28.408 [2024-07-24 16:41:25.202855] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:28.408 [2024-07-24 16:41:25.202913] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:28.408 [2024-07-24 16:41:25.203172] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:24:28.408 [2024-07-24 16:41:25.203193] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:28.408 [2024-07-24 16:41:25.203540] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:24:28.408 [2024-07-24 16:41:25.203789] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:24:28.408 [2024-07-24 16:41:25.203807] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042080 00:24:28.408 [2024-07-24 16:41:25.203985] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:24:28.408 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:28.408 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:28.408 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:28.408 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:28.408 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:28.408 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:28.408 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:28.408 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:28.408 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:28.408 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:28.408 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.408 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.667 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:28.667 "name": "raid_bdev1", 00:24:28.667 "uuid": "5972448b-7b78-4994-8dae-40def48016df", 00:24:28.667 "strip_size_kb": 64, 00:24:28.667 "state": "online", 00:24:28.667 "raid_level": "concat", 00:24:28.667 "superblock": true, 00:24:28.667 "num_base_bdevs": 4, 00:24:28.667 "num_base_bdevs_discovered": 4, 00:24:28.667 "num_base_bdevs_operational": 4, 00:24:28.667 "base_bdevs_list": [ 00:24:28.667 { 00:24:28.667 "name": "pt1", 00:24:28.667 
"uuid": "00000000-0000-0000-0000-000000000001", 00:24:28.667 "is_configured": true, 00:24:28.667 "data_offset": 2048, 00:24:28.667 "data_size": 63488 00:24:28.667 }, 00:24:28.667 { 00:24:28.667 "name": "pt2", 00:24:28.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:28.667 "is_configured": true, 00:24:28.667 "data_offset": 2048, 00:24:28.667 "data_size": 63488 00:24:28.667 }, 00:24:28.667 { 00:24:28.667 "name": "pt3", 00:24:28.667 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:28.667 "is_configured": true, 00:24:28.667 "data_offset": 2048, 00:24:28.667 "data_size": 63488 00:24:28.667 }, 00:24:28.667 { 00:24:28.667 "name": "pt4", 00:24:28.667 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:28.667 "is_configured": true, 00:24:28.667 "data_offset": 2048, 00:24:28.667 "data_size": 63488 00:24:28.667 } 00:24:28.667 ] 00:24:28.667 }' 00:24:28.667 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:28.667 16:41:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.235 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:24:29.235 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:29.235 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:29.235 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:29.235 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:29.235 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:29.235 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:29.235 16:41:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:24:29.494 [2024-07-24 16:41:26.171466] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:29.494 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:29.494 "name": "raid_bdev1", 00:24:29.494 "aliases": [ 00:24:29.494 "5972448b-7b78-4994-8dae-40def48016df" 00:24:29.494 ], 00:24:29.494 "product_name": "Raid Volume", 00:24:29.494 "block_size": 512, 00:24:29.494 "num_blocks": 253952, 00:24:29.494 "uuid": "5972448b-7b78-4994-8dae-40def48016df", 00:24:29.494 "assigned_rate_limits": { 00:24:29.494 "rw_ios_per_sec": 0, 00:24:29.494 "rw_mbytes_per_sec": 0, 00:24:29.494 "r_mbytes_per_sec": 0, 00:24:29.494 "w_mbytes_per_sec": 0 00:24:29.494 }, 00:24:29.494 "claimed": false, 00:24:29.494 "zoned": false, 00:24:29.494 "supported_io_types": { 00:24:29.494 "read": true, 00:24:29.494 "write": true, 00:24:29.494 "unmap": true, 00:24:29.494 "flush": true, 00:24:29.494 "reset": true, 00:24:29.494 "nvme_admin": false, 00:24:29.494 "nvme_io": false, 00:24:29.494 "nvme_io_md": false, 00:24:29.494 "write_zeroes": true, 00:24:29.494 "zcopy": false, 00:24:29.494 "get_zone_info": false, 00:24:29.494 "zone_management": false, 00:24:29.494 "zone_append": false, 00:24:29.494 "compare": false, 00:24:29.494 "compare_and_write": false, 00:24:29.494 "abort": false, 00:24:29.494 "seek_hole": false, 00:24:29.494 "seek_data": false, 00:24:29.494 "copy": false, 00:24:29.494 "nvme_iov_md": false 00:24:29.494 }, 00:24:29.494 "memory_domains": [ 00:24:29.494 { 00:24:29.494 "dma_device_id": "system", 00:24:29.494 "dma_device_type": 1 00:24:29.494 }, 00:24:29.494 { 00:24:29.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.494 "dma_device_type": 2 00:24:29.494 }, 00:24:29.494 { 00:24:29.494 "dma_device_id": "system", 00:24:29.494 "dma_device_type": 1 00:24:29.494 }, 00:24:29.494 { 00:24:29.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.494 "dma_device_type": 2 00:24:29.494 }, 00:24:29.494 { 00:24:29.494 
"dma_device_id": "system", 00:24:29.494 "dma_device_type": 1 00:24:29.494 }, 00:24:29.494 { 00:24:29.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.494 "dma_device_type": 2 00:24:29.494 }, 00:24:29.494 { 00:24:29.494 "dma_device_id": "system", 00:24:29.494 "dma_device_type": 1 00:24:29.494 }, 00:24:29.494 { 00:24:29.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.494 "dma_device_type": 2 00:24:29.494 } 00:24:29.494 ], 00:24:29.494 "driver_specific": { 00:24:29.494 "raid": { 00:24:29.494 "uuid": "5972448b-7b78-4994-8dae-40def48016df", 00:24:29.494 "strip_size_kb": 64, 00:24:29.494 "state": "online", 00:24:29.494 "raid_level": "concat", 00:24:29.494 "superblock": true, 00:24:29.494 "num_base_bdevs": 4, 00:24:29.494 "num_base_bdevs_discovered": 4, 00:24:29.494 "num_base_bdevs_operational": 4, 00:24:29.494 "base_bdevs_list": [ 00:24:29.494 { 00:24:29.494 "name": "pt1", 00:24:29.494 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:29.494 "is_configured": true, 00:24:29.494 "data_offset": 2048, 00:24:29.494 "data_size": 63488 00:24:29.494 }, 00:24:29.494 { 00:24:29.494 "name": "pt2", 00:24:29.494 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:29.494 "is_configured": true, 00:24:29.494 "data_offset": 2048, 00:24:29.494 "data_size": 63488 00:24:29.494 }, 00:24:29.494 { 00:24:29.494 "name": "pt3", 00:24:29.494 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:29.494 "is_configured": true, 00:24:29.494 "data_offset": 2048, 00:24:29.494 "data_size": 63488 00:24:29.494 }, 00:24:29.494 { 00:24:29.494 "name": "pt4", 00:24:29.494 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:29.494 "is_configured": true, 00:24:29.494 "data_offset": 2048, 00:24:29.494 "data_size": 63488 00:24:29.494 } 00:24:29.494 ] 00:24:29.494 } 00:24:29.494 } 00:24:29.494 }' 00:24:29.494 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:29.494 16:41:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:29.494 pt2 00:24:29.494 pt3 00:24:29.494 pt4' 00:24:29.494 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:29.494 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:29.494 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:29.753 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:29.754 "name": "pt1", 00:24:29.754 "aliases": [ 00:24:29.754 "00000000-0000-0000-0000-000000000001" 00:24:29.754 ], 00:24:29.754 "product_name": "passthru", 00:24:29.754 "block_size": 512, 00:24:29.754 "num_blocks": 65536, 00:24:29.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:29.754 "assigned_rate_limits": { 00:24:29.754 "rw_ios_per_sec": 0, 00:24:29.754 "rw_mbytes_per_sec": 0, 00:24:29.754 "r_mbytes_per_sec": 0, 00:24:29.754 "w_mbytes_per_sec": 0 00:24:29.754 }, 00:24:29.754 "claimed": true, 00:24:29.754 "claim_type": "exclusive_write", 00:24:29.754 "zoned": false, 00:24:29.754 "supported_io_types": { 00:24:29.754 "read": true, 00:24:29.754 "write": true, 00:24:29.754 "unmap": true, 00:24:29.754 "flush": true, 00:24:29.754 "reset": true, 00:24:29.754 "nvme_admin": false, 00:24:29.754 "nvme_io": false, 00:24:29.754 "nvme_io_md": false, 00:24:29.754 "write_zeroes": true, 00:24:29.754 "zcopy": true, 00:24:29.754 "get_zone_info": false, 00:24:29.754 "zone_management": false, 00:24:29.754 "zone_append": false, 00:24:29.754 "compare": false, 00:24:29.754 "compare_and_write": false, 00:24:29.754 "abort": true, 00:24:29.754 "seek_hole": false, 00:24:29.754 "seek_data": false, 00:24:29.754 "copy": true, 00:24:29.754 "nvme_iov_md": false 00:24:29.754 }, 00:24:29.754 "memory_domains": [ 00:24:29.754 { 00:24:29.754 "dma_device_id": 
"system", 00:24:29.754 "dma_device_type": 1 00:24:29.754 }, 00:24:29.754 { 00:24:29.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.754 "dma_device_type": 2 00:24:29.754 } 00:24:29.754 ], 00:24:29.754 "driver_specific": { 00:24:29.754 "passthru": { 00:24:29.754 "name": "pt1", 00:24:29.754 "base_bdev_name": "malloc1" 00:24:29.754 } 00:24:29.754 } 00:24:29.754 }' 00:24:29.754 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:29.754 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:29.754 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:29.754 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:29.754 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:29.754 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:29.754 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.012 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.012 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:30.012 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:30.012 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:30.012 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:30.012 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:30.012 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:30.012 16:41:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:30.271 16:41:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:30.271 "name": "pt2", 00:24:30.271 "aliases": [ 00:24:30.271 "00000000-0000-0000-0000-000000000002" 00:24:30.271 ], 00:24:30.271 "product_name": "passthru", 00:24:30.271 "block_size": 512, 00:24:30.271 "num_blocks": 65536, 00:24:30.271 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:30.271 "assigned_rate_limits": { 00:24:30.271 "rw_ios_per_sec": 0, 00:24:30.271 "rw_mbytes_per_sec": 0, 00:24:30.271 "r_mbytes_per_sec": 0, 00:24:30.271 "w_mbytes_per_sec": 0 00:24:30.271 }, 00:24:30.271 "claimed": true, 00:24:30.271 "claim_type": "exclusive_write", 00:24:30.271 "zoned": false, 00:24:30.271 "supported_io_types": { 00:24:30.271 "read": true, 00:24:30.271 "write": true, 00:24:30.271 "unmap": true, 00:24:30.271 "flush": true, 00:24:30.271 "reset": true, 00:24:30.271 "nvme_admin": false, 00:24:30.271 "nvme_io": false, 00:24:30.271 "nvme_io_md": false, 00:24:30.271 "write_zeroes": true, 00:24:30.271 "zcopy": true, 00:24:30.271 "get_zone_info": false, 00:24:30.271 "zone_management": false, 00:24:30.271 "zone_append": false, 00:24:30.271 "compare": false, 00:24:30.271 "compare_and_write": false, 00:24:30.271 "abort": true, 00:24:30.271 "seek_hole": false, 00:24:30.271 "seek_data": false, 00:24:30.271 "copy": true, 00:24:30.271 "nvme_iov_md": false 00:24:30.271 }, 00:24:30.271 "memory_domains": [ 00:24:30.271 { 00:24:30.271 "dma_device_id": "system", 00:24:30.271 "dma_device_type": 1 00:24:30.271 }, 00:24:30.271 { 00:24:30.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.271 "dma_device_type": 2 00:24:30.271 } 00:24:30.271 ], 00:24:30.271 "driver_specific": { 00:24:30.271 "passthru": { 00:24:30.271 "name": "pt2", 00:24:30.271 "base_bdev_name": "malloc2" 00:24:30.271 } 00:24:30.271 } 00:24:30.271 }' 00:24:30.271 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:30.271 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:24:30.271 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:30.271 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:30.532 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:30.532 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:30.532 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.532 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:30.532 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:30.532 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:30.532 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:30.532 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:30.532 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:30.532 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:30.532 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:30.825 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:30.825 "name": "pt3", 00:24:30.825 "aliases": [ 00:24:30.825 "00000000-0000-0000-0000-000000000003" 00:24:30.825 ], 00:24:30.825 "product_name": "passthru", 00:24:30.825 "block_size": 512, 00:24:30.825 "num_blocks": 65536, 00:24:30.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:30.825 "assigned_rate_limits": { 00:24:30.825 "rw_ios_per_sec": 0, 00:24:30.825 "rw_mbytes_per_sec": 0, 00:24:30.825 "r_mbytes_per_sec": 0, 00:24:30.825 "w_mbytes_per_sec": 0 00:24:30.825 }, 
00:24:30.825 "claimed": true, 00:24:30.825 "claim_type": "exclusive_write", 00:24:30.825 "zoned": false, 00:24:30.825 "supported_io_types": { 00:24:30.825 "read": true, 00:24:30.825 "write": true, 00:24:30.825 "unmap": true, 00:24:30.825 "flush": true, 00:24:30.825 "reset": true, 00:24:30.825 "nvme_admin": false, 00:24:30.825 "nvme_io": false, 00:24:30.825 "nvme_io_md": false, 00:24:30.825 "write_zeroes": true, 00:24:30.825 "zcopy": true, 00:24:30.825 "get_zone_info": false, 00:24:30.825 "zone_management": false, 00:24:30.825 "zone_append": false, 00:24:30.825 "compare": false, 00:24:30.825 "compare_and_write": false, 00:24:30.825 "abort": true, 00:24:30.825 "seek_hole": false, 00:24:30.825 "seek_data": false, 00:24:30.825 "copy": true, 00:24:30.825 "nvme_iov_md": false 00:24:30.825 }, 00:24:30.825 "memory_domains": [ 00:24:30.825 { 00:24:30.825 "dma_device_id": "system", 00:24:30.825 "dma_device_type": 1 00:24:30.825 }, 00:24:30.825 { 00:24:30.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.825 "dma_device_type": 2 00:24:30.825 } 00:24:30.825 ], 00:24:30.825 "driver_specific": { 00:24:30.825 "passthru": { 00:24:30.825 "name": "pt3", 00:24:30.825 "base_bdev_name": "malloc3" 00:24:30.825 } 00:24:30.825 } 00:24:30.825 }' 00:24:30.825 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:30.825 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.085 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:31.085 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.085 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.085 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:31.085 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.085 16:41:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.085 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:31.085 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.085 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.343 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:31.343 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:31.343 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:31.343 16:41:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:31.343 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:31.343 "name": "pt4", 00:24:31.343 "aliases": [ 00:24:31.343 "00000000-0000-0000-0000-000000000004" 00:24:31.343 ], 00:24:31.343 "product_name": "passthru", 00:24:31.343 "block_size": 512, 00:24:31.343 "num_blocks": 65536, 00:24:31.343 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:31.343 "assigned_rate_limits": { 00:24:31.343 "rw_ios_per_sec": 0, 00:24:31.343 "rw_mbytes_per_sec": 0, 00:24:31.343 "r_mbytes_per_sec": 0, 00:24:31.343 "w_mbytes_per_sec": 0 00:24:31.343 }, 00:24:31.343 "claimed": true, 00:24:31.343 "claim_type": "exclusive_write", 00:24:31.343 "zoned": false, 00:24:31.343 "supported_io_types": { 00:24:31.343 "read": true, 00:24:31.343 "write": true, 00:24:31.343 "unmap": true, 00:24:31.343 "flush": true, 00:24:31.343 "reset": true, 00:24:31.343 "nvme_admin": false, 00:24:31.343 "nvme_io": false, 00:24:31.343 "nvme_io_md": false, 00:24:31.343 "write_zeroes": true, 00:24:31.343 "zcopy": true, 00:24:31.343 "get_zone_info": false, 00:24:31.343 "zone_management": false, 00:24:31.343 "zone_append": false, 00:24:31.343 
"compare": false, 00:24:31.343 "compare_and_write": false, 00:24:31.343 "abort": true, 00:24:31.343 "seek_hole": false, 00:24:31.343 "seek_data": false, 00:24:31.343 "copy": true, 00:24:31.343 "nvme_iov_md": false 00:24:31.343 }, 00:24:31.343 "memory_domains": [ 00:24:31.343 { 00:24:31.343 "dma_device_id": "system", 00:24:31.343 "dma_device_type": 1 00:24:31.343 }, 00:24:31.343 { 00:24:31.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.343 "dma_device_type": 2 00:24:31.343 } 00:24:31.343 ], 00:24:31.343 "driver_specific": { 00:24:31.343 "passthru": { 00:24:31.343 "name": "pt4", 00:24:31.343 "base_bdev_name": "malloc4" 00:24:31.343 } 00:24:31.343 } 00:24:31.343 }' 00:24:31.343 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.602 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:31.602 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:31.602 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.602 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:31.602 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:31.602 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.602 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:31.602 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:31.602 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.862 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:31.862 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:31.862 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:31.862 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:24:31.862 [2024-07-24 16:41:28.666193] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:31.862 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=5972448b-7b78-4994-8dae-40def48016df 00:24:31.862 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 5972448b-7b78-4994-8dae-40def48016df ']' 00:24:31.862 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:32.121 [2024-07-24 16:41:28.886378] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:32.121 [2024-07-24 16:41:28.886413] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:32.121 [2024-07-24 16:41:28.886513] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:32.121 [2024-07-24 16:41:28.886603] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:32.121 [2024-07-24 16:41:28.886623] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name raid_bdev1, state offline 00:24:32.121 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.121 16:41:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:24:32.381 16:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:24:32.381 16:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:24:32.381 16:41:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:32.381 16:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:32.640 16:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:32.640 16:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:33.209 16:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:33.209 16:41:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:33.469 16:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:33.469 16:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:33.469 16:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:33.469 16:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:33.728 16:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:24:33.728 16:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:33.728 16:41:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@650 -- # local es=0 00:24:33.728 16:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:33.728 16:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:33.728 16:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.728 16:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:33.728 16:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.728 16:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:33.728 16:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:33.728 16:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:33.729 16:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:24:33.729 16:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:33.988 [2024-07-24 16:41:30.727263] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:33.988 [2024-07-24 16:41:30.729602] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:24:33.988 [2024-07-24 16:41:30.729663] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:33.988 [2024-07-24 16:41:30.729709] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:33.988 [2024-07-24 16:41:30.729767] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:33.988 [2024-07-24 16:41:30.729825] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:33.988 [2024-07-24 16:41:30.729854] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:24:33.988 [2024-07-24 16:41:30.729885] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:24:33.988 [2024-07-24 16:41:30.729907] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.988 [2024-07-24 16:41:30.729925] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state configuring 00:24:33.988 request: 00:24:33.988 { 00:24:33.988 "name": "raid_bdev1", 00:24:33.988 "raid_level": "concat", 00:24:33.988 "base_bdevs": [ 00:24:33.988 "malloc1", 00:24:33.988 "malloc2", 00:24:33.988 "malloc3", 00:24:33.988 "malloc4" 00:24:33.988 ], 00:24:33.988 "strip_size_kb": 64, 00:24:33.988 "superblock": false, 00:24:33.988 "method": "bdev_raid_create", 00:24:33.988 "req_id": 1 00:24:33.988 } 00:24:33.988 Got JSON-RPC error response 00:24:33.988 response: 00:24:33.988 { 00:24:33.988 "code": -17, 00:24:33.988 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:33.988 } 00:24:33.988 16:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:24:33.988 16:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:24:33.988 16:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:33.988 16:41:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:33.988 16:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.988 16:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:24:34.247 16:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:24:34.247 16:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:24:34.247 16:41:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:34.816 [2024-07-24 16:41:31.449123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:34.816 [2024-07-24 16:41:31.449211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:34.816 [2024-07-24 16:41:31.449237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:24:34.816 [2024-07-24 16:41:31.449256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.816 [2024-07-24 16:41:31.452101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.816 [2024-07-24 16:41:31.452151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:34.816 [2024-07-24 16:41:31.452264] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:34.816 [2024-07-24 16:41:31.452341] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:34.816 pt1 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:34.816 "name": "raid_bdev1", 00:24:34.816 "uuid": "5972448b-7b78-4994-8dae-40def48016df", 00:24:34.816 "strip_size_kb": 64, 00:24:34.816 "state": "configuring", 00:24:34.816 "raid_level": "concat", 00:24:34.816 "superblock": true, 00:24:34.816 "num_base_bdevs": 4, 00:24:34.816 "num_base_bdevs_discovered": 1, 00:24:34.816 "num_base_bdevs_operational": 4, 00:24:34.816 "base_bdevs_list": [ 00:24:34.816 { 00:24:34.816 "name": "pt1", 00:24:34.816 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:34.816 
"is_configured": true, 00:24:34.816 "data_offset": 2048, 00:24:34.816 "data_size": 63488 00:24:34.816 }, 00:24:34.816 { 00:24:34.816 "name": null, 00:24:34.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:34.816 "is_configured": false, 00:24:34.816 "data_offset": 2048, 00:24:34.816 "data_size": 63488 00:24:34.816 }, 00:24:34.816 { 00:24:34.816 "name": null, 00:24:34.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:34.816 "is_configured": false, 00:24:34.816 "data_offset": 2048, 00:24:34.816 "data_size": 63488 00:24:34.816 }, 00:24:34.816 { 00:24:34.816 "name": null, 00:24:34.816 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:34.816 "is_configured": false, 00:24:34.816 "data_offset": 2048, 00:24:34.816 "data_size": 63488 00:24:34.816 } 00:24:34.816 ] 00:24:34.816 }' 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:34.816 16:41:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.753 16:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:24:35.753 16:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:36.012 [2024-07-24 16:41:32.636336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:36.012 [2024-07-24 16:41:32.636407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.012 [2024-07-24 16:41:32.636434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 00:24:36.012 [2024-07-24 16:41:32.636453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.012 [2024-07-24 16:41:32.637033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.012 [2024-07-24 16:41:32.637062] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:36.012 [2024-07-24 16:41:32.637167] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:36.012 [2024-07-24 16:41:32.637203] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:36.012 pt2 00:24:36.012 16:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:36.012 [2024-07-24 16:41:32.864978] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:36.272 16:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:36.272 16:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:36.272 16:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:36.272 16:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:36.272 16:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:36.272 16:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:36.272 16:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:36.272 16:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:36.272 16:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:36.272 16:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:36.272 16:41:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.272 16:41:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:36.272 16:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:36.272 "name": "raid_bdev1", 00:24:36.272 "uuid": "5972448b-7b78-4994-8dae-40def48016df", 00:24:36.272 "strip_size_kb": 64, 00:24:36.272 "state": "configuring", 00:24:36.272 "raid_level": "concat", 00:24:36.272 "superblock": true, 00:24:36.272 "num_base_bdevs": 4, 00:24:36.272 "num_base_bdevs_discovered": 1, 00:24:36.272 "num_base_bdevs_operational": 4, 00:24:36.272 "base_bdevs_list": [ 00:24:36.272 { 00:24:36.272 "name": "pt1", 00:24:36.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:36.272 "is_configured": true, 00:24:36.272 "data_offset": 2048, 00:24:36.272 "data_size": 63488 00:24:36.272 }, 00:24:36.272 { 00:24:36.272 "name": null, 00:24:36.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:36.272 "is_configured": false, 00:24:36.272 "data_offset": 2048, 00:24:36.272 "data_size": 63488 00:24:36.272 }, 00:24:36.272 { 00:24:36.272 "name": null, 00:24:36.272 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:36.272 "is_configured": false, 00:24:36.272 "data_offset": 2048, 00:24:36.272 "data_size": 63488 00:24:36.272 }, 00:24:36.272 { 00:24:36.272 "name": null, 00:24:36.272 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:36.272 "is_configured": false, 00:24:36.272 "data_offset": 2048, 00:24:36.272 "data_size": 63488 00:24:36.272 } 00:24:36.272 ] 00:24:36.272 }' 00:24:36.272 16:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:36.272 16:41:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:37.210 16:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:24:37.210 16:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:37.210 16:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:37.210 [2024-07-24 16:41:33.915757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:37.210 [2024-07-24 16:41:33.915825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.210 [2024-07-24 16:41:33.915853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:24:37.210 [2024-07-24 16:41:33.915868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.210 [2024-07-24 16:41:33.916457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.210 [2024-07-24 16:41:33.916483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:37.210 [2024-07-24 16:41:33.916583] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:37.210 [2024-07-24 16:41:33.916611] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:37.210 pt2 00:24:37.210 16:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:24:37.210 16:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:37.210 16:41:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:37.469 [2024-07-24 16:41:34.144397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:37.469 [2024-07-24 16:41:34.144451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.469 [2024-07-24 16:41:34.144485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043b80 00:24:37.469 
[2024-07-24 16:41:34.144500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.469 [2024-07-24 16:41:34.145085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.469 [2024-07-24 16:41:34.145110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:37.469 [2024-07-24 16:41:34.145213] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:37.469 [2024-07-24 16:41:34.145241] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:37.469 pt3 00:24:37.469 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:24:37.469 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:37.469 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:37.728 [2024-07-24 16:41:34.373062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:37.728 [2024-07-24 16:41:34.373129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.728 [2024-07-24 16:41:34.373167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:24:37.728 [2024-07-24 16:41:34.373182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.728 [2024-07-24 16:41:34.373793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.728 [2024-07-24 16:41:34.373820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:37.728 [2024-07-24 16:41:34.373922] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:37.728 [2024-07-24 16:41:34.373951] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt4 is claimed 00:24:37.728 [2024-07-24 16:41:34.374174] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:24:37.728 [2024-07-24 16:41:34.374190] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:37.728 [2024-07-24 16:41:34.374503] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:24:37.728 [2024-07-24 16:41:34.374713] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:24:37.728 [2024-07-24 16:41:34.374733] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:24:37.728 [2024-07-24 16:41:34.374921] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:37.728 pt4 00:24:37.728 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:24:37.728 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:37.728 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:37.728 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:37.728 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:37.728 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:37.728 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:37.728 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:37.728 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:37.728 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:37.728 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:24:37.728 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:37.728 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.728 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.987 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:37.987 "name": "raid_bdev1", 00:24:37.987 "uuid": "5972448b-7b78-4994-8dae-40def48016df", 00:24:37.987 "strip_size_kb": 64, 00:24:37.987 "state": "online", 00:24:37.987 "raid_level": "concat", 00:24:37.987 "superblock": true, 00:24:37.987 "num_base_bdevs": 4, 00:24:37.987 "num_base_bdevs_discovered": 4, 00:24:37.987 "num_base_bdevs_operational": 4, 00:24:37.987 "base_bdevs_list": [ 00:24:37.987 { 00:24:37.987 "name": "pt1", 00:24:37.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:37.987 "is_configured": true, 00:24:37.987 "data_offset": 2048, 00:24:37.987 "data_size": 63488 00:24:37.987 }, 00:24:37.987 { 00:24:37.987 "name": "pt2", 00:24:37.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:37.988 "is_configured": true, 00:24:37.988 "data_offset": 2048, 00:24:37.988 "data_size": 63488 00:24:37.988 }, 00:24:37.988 { 00:24:37.988 "name": "pt3", 00:24:37.988 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:37.988 "is_configured": true, 00:24:37.988 "data_offset": 2048, 00:24:37.988 "data_size": 63488 00:24:37.988 }, 00:24:37.988 { 00:24:37.988 "name": "pt4", 00:24:37.988 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:37.988 "is_configured": true, 00:24:37.988 "data_offset": 2048, 00:24:37.988 "data_size": 63488 00:24:37.988 } 00:24:37.988 ] 00:24:37.988 }' 00:24:37.988 16:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:37.988 16:41:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:38.556 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:24:38.556 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:38.556 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:38.556 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:38.556 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:38.556 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:38.556 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:38.556 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:38.556 [2024-07-24 16:41:35.388199] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:38.556 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:38.556 "name": "raid_bdev1", 00:24:38.556 "aliases": [ 00:24:38.556 "5972448b-7b78-4994-8dae-40def48016df" 00:24:38.556 ], 00:24:38.556 "product_name": "Raid Volume", 00:24:38.556 "block_size": 512, 00:24:38.556 "num_blocks": 253952, 00:24:38.556 "uuid": "5972448b-7b78-4994-8dae-40def48016df", 00:24:38.556 "assigned_rate_limits": { 00:24:38.556 "rw_ios_per_sec": 0, 00:24:38.556 "rw_mbytes_per_sec": 0, 00:24:38.556 "r_mbytes_per_sec": 0, 00:24:38.556 "w_mbytes_per_sec": 0 00:24:38.556 }, 00:24:38.556 "claimed": false, 00:24:38.556 "zoned": false, 00:24:38.556 "supported_io_types": { 00:24:38.556 "read": true, 00:24:38.556 "write": true, 00:24:38.556 "unmap": true, 00:24:38.556 "flush": true, 00:24:38.556 "reset": true, 00:24:38.556 "nvme_admin": 
false, 00:24:38.556 "nvme_io": false, 00:24:38.556 "nvme_io_md": false, 00:24:38.556 "write_zeroes": true, 00:24:38.556 "zcopy": false, 00:24:38.556 "get_zone_info": false, 00:24:38.556 "zone_management": false, 00:24:38.556 "zone_append": false, 00:24:38.556 "compare": false, 00:24:38.556 "compare_and_write": false, 00:24:38.556 "abort": false, 00:24:38.556 "seek_hole": false, 00:24:38.556 "seek_data": false, 00:24:38.556 "copy": false, 00:24:38.556 "nvme_iov_md": false 00:24:38.556 }, 00:24:38.556 "memory_domains": [ 00:24:38.556 { 00:24:38.556 "dma_device_id": "system", 00:24:38.556 "dma_device_type": 1 00:24:38.556 }, 00:24:38.556 { 00:24:38.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.556 "dma_device_type": 2 00:24:38.556 }, 00:24:38.556 { 00:24:38.556 "dma_device_id": "system", 00:24:38.556 "dma_device_type": 1 00:24:38.556 }, 00:24:38.556 { 00:24:38.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.556 "dma_device_type": 2 00:24:38.556 }, 00:24:38.556 { 00:24:38.556 "dma_device_id": "system", 00:24:38.556 "dma_device_type": 1 00:24:38.556 }, 00:24:38.556 { 00:24:38.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.556 "dma_device_type": 2 00:24:38.556 }, 00:24:38.556 { 00:24:38.556 "dma_device_id": "system", 00:24:38.556 "dma_device_type": 1 00:24:38.556 }, 00:24:38.556 { 00:24:38.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.556 "dma_device_type": 2 00:24:38.556 } 00:24:38.556 ], 00:24:38.556 "driver_specific": { 00:24:38.556 "raid": { 00:24:38.556 "uuid": "5972448b-7b78-4994-8dae-40def48016df", 00:24:38.556 "strip_size_kb": 64, 00:24:38.556 "state": "online", 00:24:38.556 "raid_level": "concat", 00:24:38.556 "superblock": true, 00:24:38.556 "num_base_bdevs": 4, 00:24:38.556 "num_base_bdevs_discovered": 4, 00:24:38.556 "num_base_bdevs_operational": 4, 00:24:38.556 "base_bdevs_list": [ 00:24:38.556 { 00:24:38.556 "name": "pt1", 00:24:38.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:38.556 "is_configured": true, 
00:24:38.556 "data_offset": 2048, 00:24:38.556 "data_size": 63488 00:24:38.557 }, 00:24:38.557 { 00:24:38.557 "name": "pt2", 00:24:38.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:38.557 "is_configured": true, 00:24:38.557 "data_offset": 2048, 00:24:38.557 "data_size": 63488 00:24:38.557 }, 00:24:38.557 { 00:24:38.557 "name": "pt3", 00:24:38.557 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:38.557 "is_configured": true, 00:24:38.557 "data_offset": 2048, 00:24:38.557 "data_size": 63488 00:24:38.557 }, 00:24:38.557 { 00:24:38.557 "name": "pt4", 00:24:38.557 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:38.557 "is_configured": true, 00:24:38.557 "data_offset": 2048, 00:24:38.557 "data_size": 63488 00:24:38.557 } 00:24:38.557 ] 00:24:38.557 } 00:24:38.557 } 00:24:38.557 }' 00:24:38.557 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:38.816 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:38.816 pt2 00:24:38.816 pt3 00:24:38.816 pt4' 00:24:38.816 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:38.816 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:38.816 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:38.816 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:38.816 "name": "pt1", 00:24:38.816 "aliases": [ 00:24:38.816 "00000000-0000-0000-0000-000000000001" 00:24:38.816 ], 00:24:38.816 "product_name": "passthru", 00:24:38.816 "block_size": 512, 00:24:38.816 "num_blocks": 65536, 00:24:38.816 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:38.816 "assigned_rate_limits": { 00:24:38.816 "rw_ios_per_sec": 0, 
00:24:38.816 "rw_mbytes_per_sec": 0, 00:24:38.816 "r_mbytes_per_sec": 0, 00:24:38.816 "w_mbytes_per_sec": 0 00:24:38.816 }, 00:24:38.816 "claimed": true, 00:24:38.816 "claim_type": "exclusive_write", 00:24:38.816 "zoned": false, 00:24:38.816 "supported_io_types": { 00:24:38.816 "read": true, 00:24:38.816 "write": true, 00:24:38.816 "unmap": true, 00:24:38.816 "flush": true, 00:24:38.816 "reset": true, 00:24:38.816 "nvme_admin": false, 00:24:38.816 "nvme_io": false, 00:24:38.816 "nvme_io_md": false, 00:24:38.816 "write_zeroes": true, 00:24:38.816 "zcopy": true, 00:24:38.816 "get_zone_info": false, 00:24:38.816 "zone_management": false, 00:24:38.816 "zone_append": false, 00:24:38.816 "compare": false, 00:24:38.816 "compare_and_write": false, 00:24:38.816 "abort": true, 00:24:38.816 "seek_hole": false, 00:24:38.816 "seek_data": false, 00:24:38.816 "copy": true, 00:24:38.816 "nvme_iov_md": false 00:24:38.816 }, 00:24:38.816 "memory_domains": [ 00:24:38.816 { 00:24:38.816 "dma_device_id": "system", 00:24:38.816 "dma_device_type": 1 00:24:38.816 }, 00:24:38.816 { 00:24:38.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.816 "dma_device_type": 2 00:24:38.816 } 00:24:38.816 ], 00:24:38.816 "driver_specific": { 00:24:38.816 "passthru": { 00:24:38.816 "name": "pt1", 00:24:38.816 "base_bdev_name": "malloc1" 00:24:38.816 } 00:24:38.816 } 00:24:38.816 }' 00:24:38.816 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.075 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.075 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:39.075 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.075 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.075 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:39.075 16:41:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.075 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.075 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:39.075 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.335 16:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.335 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:39.335 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:39.335 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:39.335 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:39.594 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:39.594 "name": "pt2", 00:24:39.594 "aliases": [ 00:24:39.594 "00000000-0000-0000-0000-000000000002" 00:24:39.594 ], 00:24:39.594 "product_name": "passthru", 00:24:39.594 "block_size": 512, 00:24:39.594 "num_blocks": 65536, 00:24:39.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:39.594 "assigned_rate_limits": { 00:24:39.594 "rw_ios_per_sec": 0, 00:24:39.594 "rw_mbytes_per_sec": 0, 00:24:39.594 "r_mbytes_per_sec": 0, 00:24:39.594 "w_mbytes_per_sec": 0 00:24:39.594 }, 00:24:39.594 "claimed": true, 00:24:39.594 "claim_type": "exclusive_write", 00:24:39.594 "zoned": false, 00:24:39.594 "supported_io_types": { 00:24:39.594 "read": true, 00:24:39.594 "write": true, 00:24:39.594 "unmap": true, 00:24:39.594 "flush": true, 00:24:39.594 "reset": true, 00:24:39.594 "nvme_admin": false, 00:24:39.594 "nvme_io": false, 00:24:39.594 "nvme_io_md": false, 00:24:39.594 "write_zeroes": true, 00:24:39.594 "zcopy": 
true, 00:24:39.594 "get_zone_info": false, 00:24:39.594 "zone_management": false, 00:24:39.594 "zone_append": false, 00:24:39.594 "compare": false, 00:24:39.594 "compare_and_write": false, 00:24:39.594 "abort": true, 00:24:39.594 "seek_hole": false, 00:24:39.594 "seek_data": false, 00:24:39.594 "copy": true, 00:24:39.594 "nvme_iov_md": false 00:24:39.594 }, 00:24:39.594 "memory_domains": [ 00:24:39.594 { 00:24:39.594 "dma_device_id": "system", 00:24:39.594 "dma_device_type": 1 00:24:39.594 }, 00:24:39.594 { 00:24:39.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.594 "dma_device_type": 2 00:24:39.594 } 00:24:39.594 ], 00:24:39.594 "driver_specific": { 00:24:39.594 "passthru": { 00:24:39.594 "name": "pt2", 00:24:39.594 "base_bdev_name": "malloc2" 00:24:39.594 } 00:24:39.594 } 00:24:39.594 }' 00:24:39.594 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.594 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.594 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:39.594 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.594 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.594 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:39.594 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.594 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.854 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:39.854 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.854 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.854 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:24:39.854 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:39.854 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:39.854 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:39.854 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:39.854 "name": "pt3", 00:24:39.854 "aliases": [ 00:24:39.854 "00000000-0000-0000-0000-000000000003" 00:24:39.854 ], 00:24:39.854 "product_name": "passthru", 00:24:39.854 "block_size": 512, 00:24:39.854 "num_blocks": 65536, 00:24:39.854 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:39.854 "assigned_rate_limits": { 00:24:39.854 "rw_ios_per_sec": 0, 00:24:39.854 "rw_mbytes_per_sec": 0, 00:24:39.854 "r_mbytes_per_sec": 0, 00:24:39.854 "w_mbytes_per_sec": 0 00:24:39.854 }, 00:24:39.854 "claimed": true, 00:24:39.854 "claim_type": "exclusive_write", 00:24:39.854 "zoned": false, 00:24:39.854 "supported_io_types": { 00:24:39.854 "read": true, 00:24:39.854 "write": true, 00:24:39.854 "unmap": true, 00:24:39.854 "flush": true, 00:24:39.854 "reset": true, 00:24:39.854 "nvme_admin": false, 00:24:39.854 "nvme_io": false, 00:24:39.854 "nvme_io_md": false, 00:24:39.854 "write_zeroes": true, 00:24:39.854 "zcopy": true, 00:24:39.854 "get_zone_info": false, 00:24:39.854 "zone_management": false, 00:24:39.854 "zone_append": false, 00:24:39.854 "compare": false, 00:24:39.854 "compare_and_write": false, 00:24:39.854 "abort": true, 00:24:39.854 "seek_hole": false, 00:24:39.854 "seek_data": false, 00:24:39.854 "copy": true, 00:24:39.854 "nvme_iov_md": false 00:24:39.854 }, 00:24:39.854 "memory_domains": [ 00:24:39.854 { 00:24:39.854 "dma_device_id": "system", 00:24:39.854 "dma_device_type": 1 00:24:39.854 }, 00:24:39.854 { 00:24:39.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:24:39.854 "dma_device_type": 2 00:24:39.854 } 00:24:39.854 ], 00:24:39.854 "driver_specific": { 00:24:39.854 "passthru": { 00:24:39.854 "name": "pt3", 00:24:39.854 "base_bdev_name": "malloc3" 00:24:39.854 } 00:24:39.854 } 00:24:39.854 }' 00:24:39.854 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:40.113 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:40.113 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:40.113 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.113 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.113 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:40.113 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.113 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.113 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:40.113 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.371 16:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.372 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:40.372 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:40.372 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:40.372 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:40.631 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:40.631 "name": "pt4", 00:24:40.631 "aliases": [ 00:24:40.631 
"00000000-0000-0000-0000-000000000004" 00:24:40.631 ], 00:24:40.631 "product_name": "passthru", 00:24:40.631 "block_size": 512, 00:24:40.631 "num_blocks": 65536, 00:24:40.631 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:40.631 "assigned_rate_limits": { 00:24:40.631 "rw_ios_per_sec": 0, 00:24:40.631 "rw_mbytes_per_sec": 0, 00:24:40.631 "r_mbytes_per_sec": 0, 00:24:40.631 "w_mbytes_per_sec": 0 00:24:40.631 }, 00:24:40.631 "claimed": true, 00:24:40.631 "claim_type": "exclusive_write", 00:24:40.631 "zoned": false, 00:24:40.631 "supported_io_types": { 00:24:40.631 "read": true, 00:24:40.631 "write": true, 00:24:40.631 "unmap": true, 00:24:40.631 "flush": true, 00:24:40.631 "reset": true, 00:24:40.631 "nvme_admin": false, 00:24:40.631 "nvme_io": false, 00:24:40.631 "nvme_io_md": false, 00:24:40.631 "write_zeroes": true, 00:24:40.631 "zcopy": true, 00:24:40.631 "get_zone_info": false, 00:24:40.631 "zone_management": false, 00:24:40.631 "zone_append": false, 00:24:40.631 "compare": false, 00:24:40.631 "compare_and_write": false, 00:24:40.631 "abort": true, 00:24:40.631 "seek_hole": false, 00:24:40.631 "seek_data": false, 00:24:40.631 "copy": true, 00:24:40.631 "nvme_iov_md": false 00:24:40.631 }, 00:24:40.631 "memory_domains": [ 00:24:40.631 { 00:24:40.631 "dma_device_id": "system", 00:24:40.631 "dma_device_type": 1 00:24:40.631 }, 00:24:40.631 { 00:24:40.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.631 "dma_device_type": 2 00:24:40.631 } 00:24:40.631 ], 00:24:40.631 "driver_specific": { 00:24:40.631 "passthru": { 00:24:40.631 "name": "pt4", 00:24:40.631 "base_bdev_name": "malloc4" 00:24:40.631 } 00:24:40.631 } 00:24:40.631 }' 00:24:40.631 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:40.631 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:40.631 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:40.631 16:41:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.631 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.631 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:40.631 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.631 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.891 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:40.891 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.891 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.891 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:40.891 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:40.891 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:24:41.151 [2024-07-24 16:41:37.810747] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:41.151 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 5972448b-7b78-4994-8dae-40def48016df '!=' 5972448b-7b78-4994-8dae-40def48016df ']' 00:24:41.151 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:24:41.151 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:41.151 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:41.151 16:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 1711382 00:24:41.151 16:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 1711382 ']' 00:24:41.151 16:41:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 1711382 00:24:41.151 16:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:24:41.151 16:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:41.151 16:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1711382 00:24:41.151 16:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:41.151 16:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:41.151 16:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1711382' 00:24:41.151 killing process with pid 1711382 00:24:41.151 16:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 1711382 00:24:41.151 [2024-07-24 16:41:37.889916] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:41.151 [2024-07-24 16:41:37.890019] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:41.151 16:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 1711382 00:24:41.151 [2024-07-24 16:41:37.890110] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:41.151 [2024-07-24 16:41:37.890127] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:24:41.720 [2024-07-24 16:41:38.342301] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:43.625 16:41:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:24:43.625 00:24:43.625 real 0m18.246s 00:24:43.625 user 0m30.840s 00:24:43.625 sys 0m2.987s 00:24:43.625 16:41:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:43.625 16:41:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.625 ************************************ 00:24:43.625 END TEST raid_superblock_test 00:24:43.625 ************************************ 00:24:43.625 16:41:40 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:24:43.625 16:41:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:43.625 16:41:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:43.625 16:41:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:43.625 ************************************ 00:24:43.625 START TEST raid_read_error_test 00:24:43.625 ************************************ 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:43.625 
16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.V38XGgYRuC 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1714662 00:24:43.625 
16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1714662 /var/tmp/spdk-raid.sock 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 1714662 ']' 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:43.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:43.625 16:41:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.625 [2024-07-24 16:41:40.328945] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:24:43.626 [2024-07-24 16:41:40.329070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714662 ] 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:02.3 cannot be used 
00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:43.626 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:43.626 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.626 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:43.885 [2024-07-24 16:41:40.552940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.145 [2024-07-24 16:41:40.841439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.405 [2024-07-24 16:41:41.198503] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:44.405 [2024-07-24 16:41:41.198540] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:44.715 16:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:44.715 16:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:24:44.715 16:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:44.715 16:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:44.974 BaseBdev1_malloc 00:24:44.974 16:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:45.233 true 00:24:45.233 16:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:45.233 [2024-07-24 16:41:42.074940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:45.233 [2024-07-24 16:41:42.075000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.233 [2024-07-24 16:41:42.075026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:24:45.233 [2024-07-24 16:41:42.075049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.233 [2024-07-24 16:41:42.077819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.233 [2024-07-24 16:41:42.077859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:45.233 BaseBdev1 00:24:45.233 16:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:45.233 16:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:45.801 BaseBdev2_malloc 00:24:45.801 16:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:45.801 true 00:24:45.801 16:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:46.060 [2024-07-24 16:41:42.822041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:24:46.060 [2024-07-24 16:41:42.822100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:46.060 [2024-07-24 16:41:42.822125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:24:46.060 [2024-07-24 16:41:42.822154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:46.060 [2024-07-24 16:41:42.824902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:46.060 [2024-07-24 16:41:42.824941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:46.060 BaseBdev2 00:24:46.060 16:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:46.060 16:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:46.319 BaseBdev3_malloc 00:24:46.320 16:41:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:46.579 true 00:24:46.579 16:41:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:46.837 [2024-07-24 16:41:43.549224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:46.837 [2024-07-24 16:41:43.549284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:46.837 [2024-07-24 16:41:43.549311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:24:46.837 [2024-07-24 16:41:43.549329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:46.837 [2024-07-24 
16:41:43.552113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:46.837 [2024-07-24 16:41:43.552157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:46.837 BaseBdev3 00:24:46.837 16:41:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:46.837 16:41:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:47.096 BaseBdev4_malloc 00:24:47.096 16:41:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:47.356 true 00:24:47.356 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:47.615 [2024-07-24 16:41:44.273690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:47.615 [2024-07-24 16:41:44.273753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:47.615 [2024-07-24 16:41:44.273781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:24:47.615 [2024-07-24 16:41:44.273799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:47.615 [2024-07-24 16:41:44.276773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:47.615 [2024-07-24 16:41:44.276813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:47.615 BaseBdev4 00:24:47.615 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:47.875 [2024-07-24 16:41:44.498342] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:47.875 [2024-07-24 16:41:44.500697] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:47.875 [2024-07-24 16:41:44.500793] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:47.875 [2024-07-24 16:41:44.500875] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:47.875 [2024-07-24 16:41:44.501178] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:24:47.875 [2024-07-24 16:41:44.501200] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:47.875 [2024-07-24 16:41:44.501551] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:24:47.875 [2024-07-24 16:41:44.501808] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:24:47.875 [2024-07-24 16:41:44.501823] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:24:47.875 [2024-07-24 16:41:44.502017] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:47.875 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:47.875 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:47.875 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:47.875 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:47.875 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:47.875 16:41:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:47.875 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:47.875 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:47.875 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:47.875 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:47.875 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.875 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.134 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:48.134 "name": "raid_bdev1", 00:24:48.134 "uuid": "7c027f73-fc19-4637-94ac-c707833100cb", 00:24:48.134 "strip_size_kb": 64, 00:24:48.134 "state": "online", 00:24:48.134 "raid_level": "concat", 00:24:48.134 "superblock": true, 00:24:48.134 "num_base_bdevs": 4, 00:24:48.134 "num_base_bdevs_discovered": 4, 00:24:48.134 "num_base_bdevs_operational": 4, 00:24:48.134 "base_bdevs_list": [ 00:24:48.134 { 00:24:48.134 "name": "BaseBdev1", 00:24:48.134 "uuid": "92ce5571-464c-5096-acf3-1210f9b25aa9", 00:24:48.134 "is_configured": true, 00:24:48.134 "data_offset": 2048, 00:24:48.134 "data_size": 63488 00:24:48.134 }, 00:24:48.134 { 00:24:48.134 "name": "BaseBdev2", 00:24:48.134 "uuid": "14e2376c-b057-5560-b843-28b129fe94bc", 00:24:48.134 "is_configured": true, 00:24:48.134 "data_offset": 2048, 00:24:48.134 "data_size": 63488 00:24:48.134 }, 00:24:48.134 { 00:24:48.134 "name": "BaseBdev3", 00:24:48.134 "uuid": "ab7fc6fd-bacf-596e-8af7-550c8b311e8b", 00:24:48.134 "is_configured": true, 00:24:48.134 "data_offset": 2048, 00:24:48.134 "data_size": 63488 
00:24:48.134 }, 00:24:48.134 { 00:24:48.134 "name": "BaseBdev4", 00:24:48.134 "uuid": "47efc285-de4a-5e50-a7d7-475380a8398c", 00:24:48.134 "is_configured": true, 00:24:48.134 "data_offset": 2048, 00:24:48.134 "data_size": 63488 00:24:48.134 } 00:24:48.134 ] 00:24:48.134 }' 00:24:48.134 16:41:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:48.134 16:41:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:48.701 16:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:24:48.701 16:41:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:48.701 [2024-07-24 16:41:45.394819] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:24:49.639 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 
-- # local strip_size=64 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.899 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.158 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:50.158 "name": "raid_bdev1", 00:24:50.158 "uuid": "7c027f73-fc19-4637-94ac-c707833100cb", 00:24:50.158 "strip_size_kb": 64, 00:24:50.158 "state": "online", 00:24:50.158 "raid_level": "concat", 00:24:50.158 "superblock": true, 00:24:50.158 "num_base_bdevs": 4, 00:24:50.158 "num_base_bdevs_discovered": 4, 00:24:50.158 "num_base_bdevs_operational": 4, 00:24:50.158 "base_bdevs_list": [ 00:24:50.158 { 00:24:50.159 "name": "BaseBdev1", 00:24:50.159 "uuid": "92ce5571-464c-5096-acf3-1210f9b25aa9", 00:24:50.159 "is_configured": true, 00:24:50.159 "data_offset": 2048, 00:24:50.159 "data_size": 63488 00:24:50.159 }, 00:24:50.159 { 00:24:50.159 "name": "BaseBdev2", 00:24:50.159 "uuid": "14e2376c-b057-5560-b843-28b129fe94bc", 00:24:50.159 "is_configured": true, 00:24:50.159 "data_offset": 2048, 00:24:50.159 "data_size": 63488 00:24:50.159 }, 00:24:50.159 { 00:24:50.159 "name": "BaseBdev3", 00:24:50.159 "uuid": "ab7fc6fd-bacf-596e-8af7-550c8b311e8b", 00:24:50.159 "is_configured": true, 00:24:50.159 
"data_offset": 2048, 00:24:50.159 "data_size": 63488 00:24:50.159 }, 00:24:50.159 { 00:24:50.159 "name": "BaseBdev4", 00:24:50.159 "uuid": "47efc285-de4a-5e50-a7d7-475380a8398c", 00:24:50.159 "is_configured": true, 00:24:50.159 "data_offset": 2048, 00:24:50.159 "data_size": 63488 00:24:50.159 } 00:24:50.159 ] 00:24:50.159 }' 00:24:50.159 16:41:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:50.159 16:41:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.728 16:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:50.728 [2024-07-24 16:41:47.453756] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:50.728 [2024-07-24 16:41:47.453798] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:50.728 [2024-07-24 16:41:47.457065] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:50.728 [2024-07-24 16:41:47.457125] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:50.728 [2024-07-24 16:41:47.457187] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:50.729 [2024-07-24 16:41:47.457216] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:24:50.729 0 00:24:50.729 16:41:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1714662 00:24:50.729 16:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 1714662 ']' 00:24:50.729 16:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 1714662 00:24:50.729 16:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:24:50.729 16:41:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:50.729 16:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1714662 00:24:50.729 16:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:50.729 16:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:50.729 16:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1714662' 00:24:50.729 killing process with pid 1714662 00:24:50.729 16:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 1714662 00:24:50.729 [2024-07-24 16:41:47.509271] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:50.729 16:41:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 1714662 00:24:51.297 [2024-07-24 16:41:47.881061] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:53.204 16:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.V38XGgYRuC 00:24:53.204 16:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:24:53.204 16:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:24:53.204 16:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.49 00:24:53.204 16:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:24:53.204 16:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:53.204 16:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:53.204 16:41:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.49 != \0\.\0\0 ]] 00:24:53.204 00:24:53.204 real 0m9.524s 00:24:53.204 user 0m13.545s 00:24:53.204 sys 0m1.393s 00:24:53.204 16:41:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:24:53.204 16:41:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.204 ************************************ 00:24:53.204 END TEST raid_read_error_test 00:24:53.204 ************************************ 00:24:53.204 16:41:49 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:24:53.204 16:41:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:53.204 16:41:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:53.204 16:41:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:53.204 ************************************ 00:24:53.204 START TEST raid_write_error_test 00:24:53.204 ************************************ 00:24:53.204 16:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:24:53.204 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:24:53.204 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:24:53.204 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:24:53.204 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:24:53.204 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:53.204 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:24:53.204 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:53.204 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:53.204 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:24:53.204 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:53.204 16:41:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:53.204 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # 
bdevperf_log=/raidtest/tmp.OtHPAepvRu 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1716345 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1716345 /var/tmp/spdk-raid.sock 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 1716345 ']' 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:53.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:53.205 16:41:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.205 [2024-07-24 16:41:49.949011] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:24:53.205 [2024-07-24 16:41:49.949133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1716345 ] 00:24:53.464 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.464 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:53.464 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.464 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3d:02.3 cannot be used 
00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:53.465 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:53.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:53.465 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:53.465 [2024-07-24 16:41:50.178497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.724 [2024-07-24 16:41:50.450970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.984 [2024-07-24 16:41:50.807493] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:53.984 [2024-07-24 16:41:50.807534] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:54.244 16:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:54.244 16:41:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:24:54.244 16:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:54.244 16:41:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:54.540 BaseBdev1_malloc 00:24:54.541 16:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:54.799 true 00:24:54.799 16:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:55.059 [2024-07-24 16:41:51.687019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:55.059 [2024-07-24 16:41:51.687078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.059 [2024-07-24 16:41:51.687102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:24:55.059 [2024-07-24 16:41:51.687124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.059 [2024-07-24 16:41:51.689846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.059 [2024-07-24 16:41:51.689887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:55.059 BaseBdev1 00:24:55.059 16:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:55.059 16:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:55.318 BaseBdev2_malloc 00:24:55.318 16:41:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:55.576 true 00:24:55.576 16:41:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:55.576 [2024-07-24 16:41:52.422042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:24:55.576 [2024-07-24 16:41:52.422101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.576 [2024-07-24 16:41:52.422124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:24:55.576 [2024-07-24 16:41:52.422152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.576 [2024-07-24 16:41:52.424767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.576 [2024-07-24 16:41:52.424803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:55.576 BaseBdev2 00:24:55.834 16:41:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:55.834 16:41:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:56.092 BaseBdev3_malloc 00:24:56.092 16:41:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:56.092 true 00:24:56.092 16:41:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:56.350 [2024-07-24 16:41:53.139901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:56.350 [2024-07-24 16:41:53.139957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:56.350 [2024-07-24 16:41:53.139982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:24:56.350 [2024-07-24 16:41:53.140000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:56.350 
[2024-07-24 16:41:53.142743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:56.350 [2024-07-24 16:41:53.142781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:56.350 BaseBdev3 00:24:56.350 16:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:24:56.350 16:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:56.609 BaseBdev4_malloc 00:24:56.609 16:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:56.867 true 00:24:56.867 16:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:57.126 [2024-07-24 16:41:53.877829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:57.126 [2024-07-24 16:41:53.877895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:57.126 [2024-07-24 16:41:53.877921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:24:57.126 [2024-07-24 16:41:53.877939] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:57.127 [2024-07-24 16:41:53.880602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:57.127 [2024-07-24 16:41:53.880638] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:57.127 BaseBdev4 00:24:57.127 16:41:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:57.385 [2024-07-24 16:41:54.106483] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:57.385 [2024-07-24 16:41:54.108815] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:57.385 [2024-07-24 16:41:54.108907] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:57.385 [2024-07-24 16:41:54.108988] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:57.385 [2024-07-24 16:41:54.109285] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:24:57.385 [2024-07-24 16:41:54.109306] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:57.385 [2024-07-24 16:41:54.109634] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:24:57.385 [2024-07-24 16:41:54.109886] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:24:57.385 [2024-07-24 16:41:54.109901] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:24:57.385 [2024-07-24 16:41:54.110075] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:57.385 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:57.385 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:57.385 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:57.385 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:57.385 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:57.385 16:41:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:57.385 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:57.385 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:57.385 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:57.385 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:57.385 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.385 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.646 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:57.646 "name": "raid_bdev1", 00:24:57.646 "uuid": "f90d61ac-68bb-4665-8c09-0a9640b84848", 00:24:57.646 "strip_size_kb": 64, 00:24:57.646 "state": "online", 00:24:57.646 "raid_level": "concat", 00:24:57.646 "superblock": true, 00:24:57.646 "num_base_bdevs": 4, 00:24:57.646 "num_base_bdevs_discovered": 4, 00:24:57.646 "num_base_bdevs_operational": 4, 00:24:57.646 "base_bdevs_list": [ 00:24:57.646 { 00:24:57.646 "name": "BaseBdev1", 00:24:57.646 "uuid": "8537a01e-bed4-52a8-9a56-41a8741b2163", 00:24:57.646 "is_configured": true, 00:24:57.646 "data_offset": 2048, 00:24:57.646 "data_size": 63488 00:24:57.646 }, 00:24:57.646 { 00:24:57.646 "name": "BaseBdev2", 00:24:57.646 "uuid": "484312cf-5e13-521c-bdf0-cae45461c672", 00:24:57.646 "is_configured": true, 00:24:57.647 "data_offset": 2048, 00:24:57.647 "data_size": 63488 00:24:57.647 }, 00:24:57.647 { 00:24:57.647 "name": "BaseBdev3", 00:24:57.647 "uuid": "9ac5d056-dde3-543e-adcf-f5a8813bbb25", 00:24:57.647 "is_configured": true, 00:24:57.647 "data_offset": 2048, 00:24:57.647 "data_size": 
63488 00:24:57.647 }, 00:24:57.647 { 00:24:57.647 "name": "BaseBdev4", 00:24:57.647 "uuid": "23a9e6ba-aefe-5014-8225-db7bafb4882b", 00:24:57.647 "is_configured": true, 00:24:57.647 "data_offset": 2048, 00:24:57.647 "data_size": 63488 00:24:57.647 } 00:24:57.647 ] 00:24:57.647 }' 00:24:57.647 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:57.647 16:41:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:58.282 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:24:58.282 16:41:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:58.282 [2024-07-24 16:41:54.978901] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:24:59.219 16:41:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.478 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:59.478 "name": "raid_bdev1", 00:24:59.478 "uuid": "f90d61ac-68bb-4665-8c09-0a9640b84848", 00:24:59.478 "strip_size_kb": 64, 00:24:59.478 "state": "online", 00:24:59.478 "raid_level": "concat", 00:24:59.478 "superblock": true, 00:24:59.478 "num_base_bdevs": 4, 00:24:59.479 "num_base_bdevs_discovered": 4, 00:24:59.479 "num_base_bdevs_operational": 4, 00:24:59.479 "base_bdevs_list": [ 00:24:59.479 { 00:24:59.479 "name": "BaseBdev1", 00:24:59.479 "uuid": "8537a01e-bed4-52a8-9a56-41a8741b2163", 00:24:59.479 "is_configured": true, 00:24:59.479 "data_offset": 2048, 00:24:59.479 "data_size": 63488 00:24:59.479 }, 00:24:59.479 { 00:24:59.479 "name": "BaseBdev2", 00:24:59.479 "uuid": "484312cf-5e13-521c-bdf0-cae45461c672", 00:24:59.479 "is_configured": true, 00:24:59.479 "data_offset": 2048, 00:24:59.479 "data_size": 63488 00:24:59.479 }, 00:24:59.479 { 00:24:59.479 "name": "BaseBdev3", 00:24:59.479 "uuid": "9ac5d056-dde3-543e-adcf-f5a8813bbb25", 00:24:59.479 
"is_configured": true, 00:24:59.479 "data_offset": 2048, 00:24:59.479 "data_size": 63488 00:24:59.479 }, 00:24:59.479 { 00:24:59.479 "name": "BaseBdev4", 00:24:59.479 "uuid": "23a9e6ba-aefe-5014-8225-db7bafb4882b", 00:24:59.479 "is_configured": true, 00:24:59.479 "data_offset": 2048, 00:24:59.479 "data_size": 63488 00:24:59.479 } 00:24:59.479 ] 00:24:59.479 }' 00:24:59.479 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:59.479 16:41:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.415 16:41:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:00.415 [2024-07-24 16:41:57.158827] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:00.415 [2024-07-24 16:41:57.158874] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:00.415 [2024-07-24 16:41:57.162178] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:00.415 [2024-07-24 16:41:57.162257] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.415 [2024-07-24 16:41:57.162312] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:00.415 [2024-07-24 16:41:57.162341] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:25:00.415 0 00:25:00.415 16:41:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1716345 00:25:00.415 16:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 1716345 ']' 00:25:00.415 16:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 1716345 00:25:00.415 16:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:25:00.415 
16:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:00.415 16:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1716345 00:25:00.415 16:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:00.415 16:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:00.415 16:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1716345' 00:25:00.415 killing process with pid 1716345 00:25:00.415 16:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 1716345 00:25:00.415 16:41:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 1716345 00:25:00.415 [2024-07-24 16:41:57.267479] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:00.983 [2024-07-24 16:41:57.637632] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:02.890 16:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.OtHPAepvRu 00:25:02.890 16:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:25:02.890 16:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:25:02.890 16:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:25:02.890 16:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:25:02.890 16:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:02.890 16:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:02.890 16:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:25:02.890 00:25:02.890 real 0m9.630s 00:25:02.890 user 0m13.742s 00:25:02.890 sys 0m1.484s 00:25:02.890 16:41:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:02.890 16:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.890 ************************************ 00:25:02.890 END TEST raid_write_error_test 00:25:02.890 ************************************ 00:25:02.890 16:41:59 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:25:02.890 16:41:59 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:25:02.891 16:41:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:02.891 16:41:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:02.891 16:41:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:02.891 ************************************ 00:25:02.891 START TEST raid_state_function_test 00:25:02.891 ************************************ 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' 
false = true ']' 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=1718035 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1718035' 00:25:02.891 Process raid pid: 1718035 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 1718035 /var/tmp/spdk-raid.sock 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 1718035 ']' 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:02.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.891 16:41:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:02.891 [2024-07-24 16:41:59.639209] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:25:02.891 [2024-07-24 16:41:59.639324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:03.151 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:03.151 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:03.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:03.151 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:03.151 [2024-07-24 16:41:59.868505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.410 [2024-07-24 16:42:00.158212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.670 [2024-07-24 16:42:00.524371] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:03.670 [2024-07-24 16:42:00.524407] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:03.928 16:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:03.928 16:42:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:25:03.928 16:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:04.188 [2024-07-24 16:42:00.851733] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:04.188 [2024-07-24 16:42:00.851787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:25:04.188 [2024-07-24 16:42:00.851802] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:04.188 [2024-07-24 16:42:00.851820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:04.188 [2024-07-24 16:42:00.851832] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:04.188 [2024-07-24 16:42:00.851848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:04.188 [2024-07-24 16:42:00.851859] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:04.188 [2024-07-24 16:42:00.851875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:04.188 16:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:04.188 16:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:04.188 16:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:04.188 16:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:04.188 16:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:04.188 16:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:04.188 16:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:04.188 16:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:04.188 16:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:04.188 16:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:04.188 16:42:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.188 16:42:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:04.447 16:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:04.447 "name": "Existed_Raid", 00:25:04.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.447 "strip_size_kb": 0, 00:25:04.447 "state": "configuring", 00:25:04.447 "raid_level": "raid1", 00:25:04.447 "superblock": false, 00:25:04.447 "num_base_bdevs": 4, 00:25:04.447 "num_base_bdevs_discovered": 0, 00:25:04.447 "num_base_bdevs_operational": 4, 00:25:04.447 "base_bdevs_list": [ 00:25:04.447 { 00:25:04.447 "name": "BaseBdev1", 00:25:04.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.447 "is_configured": false, 00:25:04.447 "data_offset": 0, 00:25:04.447 "data_size": 0 00:25:04.447 }, 00:25:04.447 { 00:25:04.447 "name": "BaseBdev2", 00:25:04.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.447 "is_configured": false, 00:25:04.447 "data_offset": 0, 00:25:04.447 "data_size": 0 00:25:04.447 }, 00:25:04.447 { 00:25:04.447 "name": "BaseBdev3", 00:25:04.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.447 "is_configured": false, 00:25:04.447 "data_offset": 0, 00:25:04.447 "data_size": 0 00:25:04.447 }, 00:25:04.447 { 00:25:04.447 "name": "BaseBdev4", 00:25:04.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.447 "is_configured": false, 00:25:04.447 "data_offset": 0, 00:25:04.447 "data_size": 0 00:25:04.447 } 00:25:04.447 ] 00:25:04.447 }' 00:25:04.447 16:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:04.447 16:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:05.016 16:42:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:05.275 [2024-07-24 16:42:01.926521] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:05.275 [2024-07-24 16:42:01.926570] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:25:05.275 16:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:05.534 [2024-07-24 16:42:02.155199] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:05.534 [2024-07-24 16:42:02.155251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:05.534 [2024-07-24 16:42:02.155265] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:05.534 [2024-07-24 16:42:02.155289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:05.534 [2024-07-24 16:42:02.155301] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:05.534 [2024-07-24 16:42:02.155317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:05.534 [2024-07-24 16:42:02.155328] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:05.534 [2024-07-24 16:42:02.155344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:05.534 16:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:05.793 [2024-07-24 16:42:02.440667] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:05.793 BaseBdev1 00:25:05.793 16:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:05.793 16:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:05.793 16:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:05.793 16:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:05.793 16:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:05.793 16:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:05.793 16:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:06.052 16:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:06.052 [ 00:25:06.052 { 00:25:06.052 "name": "BaseBdev1", 00:25:06.052 "aliases": [ 00:25:06.052 "7f242eaa-ecb1-40ed-a1b3-d90ffc85f78d" 00:25:06.052 ], 00:25:06.052 "product_name": "Malloc disk", 00:25:06.052 "block_size": 512, 00:25:06.052 "num_blocks": 65536, 00:25:06.052 "uuid": "7f242eaa-ecb1-40ed-a1b3-d90ffc85f78d", 00:25:06.052 "assigned_rate_limits": { 00:25:06.052 "rw_ios_per_sec": 0, 00:25:06.052 "rw_mbytes_per_sec": 0, 00:25:06.052 "r_mbytes_per_sec": 0, 00:25:06.052 "w_mbytes_per_sec": 0 00:25:06.053 }, 00:25:06.053 "claimed": true, 00:25:06.053 "claim_type": "exclusive_write", 00:25:06.053 "zoned": false, 00:25:06.053 "supported_io_types": { 00:25:06.053 "read": true, 00:25:06.053 "write": true, 00:25:06.053 "unmap": true, 00:25:06.053 "flush": true, 00:25:06.053 
"reset": true, 00:25:06.053 "nvme_admin": false, 00:25:06.053 "nvme_io": false, 00:25:06.053 "nvme_io_md": false, 00:25:06.053 "write_zeroes": true, 00:25:06.053 "zcopy": true, 00:25:06.053 "get_zone_info": false, 00:25:06.053 "zone_management": false, 00:25:06.053 "zone_append": false, 00:25:06.053 "compare": false, 00:25:06.053 "compare_and_write": false, 00:25:06.053 "abort": true, 00:25:06.053 "seek_hole": false, 00:25:06.053 "seek_data": false, 00:25:06.053 "copy": true, 00:25:06.053 "nvme_iov_md": false 00:25:06.053 }, 00:25:06.053 "memory_domains": [ 00:25:06.053 { 00:25:06.053 "dma_device_id": "system", 00:25:06.053 "dma_device_type": 1 00:25:06.053 }, 00:25:06.053 { 00:25:06.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.053 "dma_device_type": 2 00:25:06.053 } 00:25:06.053 ], 00:25:06.053 "driver_specific": {} 00:25:06.053 } 00:25:06.053 ] 00:25:06.053 16:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:06.053 16:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:06.053 16:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:06.053 16:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:06.053 16:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:06.053 16:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:06.053 16:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:06.053 16:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:06.053 16:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.053 16:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:25:06.053 16:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.053 16:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.053 16:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.312 16:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:06.312 "name": "Existed_Raid", 00:25:06.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.312 "strip_size_kb": 0, 00:25:06.312 "state": "configuring", 00:25:06.312 "raid_level": "raid1", 00:25:06.312 "superblock": false, 00:25:06.312 "num_base_bdevs": 4, 00:25:06.312 "num_base_bdevs_discovered": 1, 00:25:06.312 "num_base_bdevs_operational": 4, 00:25:06.312 "base_bdevs_list": [ 00:25:06.312 { 00:25:06.312 "name": "BaseBdev1", 00:25:06.312 "uuid": "7f242eaa-ecb1-40ed-a1b3-d90ffc85f78d", 00:25:06.312 "is_configured": true, 00:25:06.312 "data_offset": 0, 00:25:06.312 "data_size": 65536 00:25:06.312 }, 00:25:06.312 { 00:25:06.312 "name": "BaseBdev2", 00:25:06.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.312 "is_configured": false, 00:25:06.312 "data_offset": 0, 00:25:06.312 "data_size": 0 00:25:06.312 }, 00:25:06.312 { 00:25:06.312 "name": "BaseBdev3", 00:25:06.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.312 "is_configured": false, 00:25:06.312 "data_offset": 0, 00:25:06.312 "data_size": 0 00:25:06.312 }, 00:25:06.312 { 00:25:06.312 "name": "BaseBdev4", 00:25:06.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.312 "is_configured": false, 00:25:06.312 "data_offset": 0, 00:25:06.312 "data_size": 0 00:25:06.312 } 00:25:06.312 ] 00:25:06.312 }' 00:25:06.312 16:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:25:06.312 16:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:06.880 16:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:07.139 [2024-07-24 16:42:03.904825] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:07.139 [2024-07-24 16:42:03.904879] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:25:07.139 16:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:07.398 [2024-07-24 16:42:04.129524] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:07.399 [2024-07-24 16:42:04.131824] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:07.399 [2024-07-24 16:42:04.131869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:07.399 [2024-07-24 16:42:04.131883] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:07.399 [2024-07-24 16:42:04.131900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:07.399 [2024-07-24 16:42:04.131912] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:07.399 [2024-07-24 16:42:04.131931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:07.399 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:07.399 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:07.399 16:42:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:07.399 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:07.399 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:07.399 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:07.399 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:07.399 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:07.399 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:07.399 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:07.399 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:07.399 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:07.399 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.399 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:07.659 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:07.659 "name": "Existed_Raid", 00:25:07.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.659 "strip_size_kb": 0, 00:25:07.659 "state": "configuring", 00:25:07.659 "raid_level": "raid1", 00:25:07.659 "superblock": false, 00:25:07.659 "num_base_bdevs": 4, 00:25:07.659 "num_base_bdevs_discovered": 1, 00:25:07.659 "num_base_bdevs_operational": 4, 00:25:07.659 "base_bdevs_list": [ 00:25:07.659 { 00:25:07.659 
"name": "BaseBdev1", 00:25:07.659 "uuid": "7f242eaa-ecb1-40ed-a1b3-d90ffc85f78d", 00:25:07.659 "is_configured": true, 00:25:07.659 "data_offset": 0, 00:25:07.659 "data_size": 65536 00:25:07.659 }, 00:25:07.659 { 00:25:07.659 "name": "BaseBdev2", 00:25:07.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.659 "is_configured": false, 00:25:07.659 "data_offset": 0, 00:25:07.659 "data_size": 0 00:25:07.659 }, 00:25:07.659 { 00:25:07.659 "name": "BaseBdev3", 00:25:07.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.659 "is_configured": false, 00:25:07.659 "data_offset": 0, 00:25:07.659 "data_size": 0 00:25:07.659 }, 00:25:07.659 { 00:25:07.659 "name": "BaseBdev4", 00:25:07.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.659 "is_configured": false, 00:25:07.659 "data_offset": 0, 00:25:07.659 "data_size": 0 00:25:07.659 } 00:25:07.659 ] 00:25:07.659 }' 00:25:07.659 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:07.659 16:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.228 16:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:08.487 [2024-07-24 16:42:05.218537] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:08.487 BaseBdev2 00:25:08.487 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:08.487 16:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:08.487 16:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:08.487 16:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:08.487 16:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:25:08.487 16:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:08.487 16:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:08.746 16:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:09.005 [ 00:25:09.005 { 00:25:09.005 "name": "BaseBdev2", 00:25:09.005 "aliases": [ 00:25:09.005 "fa7a2e10-3c0e-44f4-af6a-54997fe1d633" 00:25:09.005 ], 00:25:09.005 "product_name": "Malloc disk", 00:25:09.005 "block_size": 512, 00:25:09.005 "num_blocks": 65536, 00:25:09.005 "uuid": "fa7a2e10-3c0e-44f4-af6a-54997fe1d633", 00:25:09.005 "assigned_rate_limits": { 00:25:09.005 "rw_ios_per_sec": 0, 00:25:09.005 "rw_mbytes_per_sec": 0, 00:25:09.005 "r_mbytes_per_sec": 0, 00:25:09.005 "w_mbytes_per_sec": 0 00:25:09.005 }, 00:25:09.005 "claimed": true, 00:25:09.005 "claim_type": "exclusive_write", 00:25:09.005 "zoned": false, 00:25:09.005 "supported_io_types": { 00:25:09.005 "read": true, 00:25:09.005 "write": true, 00:25:09.005 "unmap": true, 00:25:09.005 "flush": true, 00:25:09.005 "reset": true, 00:25:09.005 "nvme_admin": false, 00:25:09.005 "nvme_io": false, 00:25:09.005 "nvme_io_md": false, 00:25:09.005 "write_zeroes": true, 00:25:09.005 "zcopy": true, 00:25:09.005 "get_zone_info": false, 00:25:09.005 "zone_management": false, 00:25:09.005 "zone_append": false, 00:25:09.005 "compare": false, 00:25:09.005 "compare_and_write": false, 00:25:09.005 "abort": true, 00:25:09.005 "seek_hole": false, 00:25:09.005 "seek_data": false, 00:25:09.005 "copy": true, 00:25:09.005 "nvme_iov_md": false 00:25:09.005 }, 00:25:09.005 "memory_domains": [ 00:25:09.005 { 00:25:09.005 "dma_device_id": "system", 00:25:09.005 
"dma_device_type": 1 00:25:09.005 }, 00:25:09.005 { 00:25:09.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.005 "dma_device_type": 2 00:25:09.005 } 00:25:09.005 ], 00:25:09.005 "driver_specific": {} 00:25:09.005 } 00:25:09.005 ] 00:25:09.005 16:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:09.005 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:09.005 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:09.005 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:09.005 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:09.006 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:09.006 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:09.006 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:09.006 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:09.006 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:09.006 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:09.006 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:09.006 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:09.006 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.006 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:25:09.265 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.265 "name": "Existed_Raid", 00:25:09.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.265 "strip_size_kb": 0, 00:25:09.265 "state": "configuring", 00:25:09.265 "raid_level": "raid1", 00:25:09.265 "superblock": false, 00:25:09.265 "num_base_bdevs": 4, 00:25:09.265 "num_base_bdevs_discovered": 2, 00:25:09.265 "num_base_bdevs_operational": 4, 00:25:09.265 "base_bdevs_list": [ 00:25:09.265 { 00:25:09.265 "name": "BaseBdev1", 00:25:09.265 "uuid": "7f242eaa-ecb1-40ed-a1b3-d90ffc85f78d", 00:25:09.265 "is_configured": true, 00:25:09.265 "data_offset": 0, 00:25:09.265 "data_size": 65536 00:25:09.265 }, 00:25:09.265 { 00:25:09.265 "name": "BaseBdev2", 00:25:09.265 "uuid": "fa7a2e10-3c0e-44f4-af6a-54997fe1d633", 00:25:09.265 "is_configured": true, 00:25:09.265 "data_offset": 0, 00:25:09.265 "data_size": 65536 00:25:09.265 }, 00:25:09.265 { 00:25:09.265 "name": "BaseBdev3", 00:25:09.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.266 "is_configured": false, 00:25:09.266 "data_offset": 0, 00:25:09.266 "data_size": 0 00:25:09.266 }, 00:25:09.266 { 00:25:09.266 "name": "BaseBdev4", 00:25:09.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.266 "is_configured": false, 00:25:09.266 "data_offset": 0, 00:25:09.266 "data_size": 0 00:25:09.266 } 00:25:09.266 ] 00:25:09.266 }' 00:25:09.266 16:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:09.266 16:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:09.835 16:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:10.409 [2024-07-24 16:42:07.013465] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:25:10.409 BaseBdev3 00:25:10.410 16:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:10.410 16:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:10.410 16:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:10.410 16:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:10.410 16:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:10.410 16:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:10.410 16:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:10.410 16:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:10.979 [ 00:25:10.979 { 00:25:10.979 "name": "BaseBdev3", 00:25:10.979 "aliases": [ 00:25:10.979 "3635d4ea-d2ed-4f4b-a96b-034e2253c71a" 00:25:10.979 ], 00:25:10.979 "product_name": "Malloc disk", 00:25:10.979 "block_size": 512, 00:25:10.979 "num_blocks": 65536, 00:25:10.979 "uuid": "3635d4ea-d2ed-4f4b-a96b-034e2253c71a", 00:25:10.979 "assigned_rate_limits": { 00:25:10.979 "rw_ios_per_sec": 0, 00:25:10.979 "rw_mbytes_per_sec": 0, 00:25:10.979 "r_mbytes_per_sec": 0, 00:25:10.979 "w_mbytes_per_sec": 0 00:25:10.979 }, 00:25:10.979 "claimed": true, 00:25:10.979 "claim_type": "exclusive_write", 00:25:10.979 "zoned": false, 00:25:10.979 "supported_io_types": { 00:25:10.979 "read": true, 00:25:10.979 "write": true, 00:25:10.979 "unmap": true, 00:25:10.979 "flush": true, 00:25:10.979 "reset": true, 00:25:10.979 "nvme_admin": false, 00:25:10.979 "nvme_io": 
false, 00:25:10.979 "nvme_io_md": false, 00:25:10.979 "write_zeroes": true, 00:25:10.979 "zcopy": true, 00:25:10.979 "get_zone_info": false, 00:25:10.979 "zone_management": false, 00:25:10.979 "zone_append": false, 00:25:10.979 "compare": false, 00:25:10.979 "compare_and_write": false, 00:25:10.979 "abort": true, 00:25:10.979 "seek_hole": false, 00:25:10.979 "seek_data": false, 00:25:10.979 "copy": true, 00:25:10.979 "nvme_iov_md": false 00:25:10.979 }, 00:25:10.979 "memory_domains": [ 00:25:10.979 { 00:25:10.979 "dma_device_id": "system", 00:25:10.979 "dma_device_type": 1 00:25:10.979 }, 00:25:10.979 { 00:25:10.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.979 "dma_device_type": 2 00:25:10.979 } 00:25:10.979 ], 00:25:10.979 "driver_specific": {} 00:25:10.979 } 00:25:10.979 ] 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.979 16:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:11.239 16:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:11.239 "name": "Existed_Raid", 00:25:11.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.239 "strip_size_kb": 0, 00:25:11.239 "state": "configuring", 00:25:11.239 "raid_level": "raid1", 00:25:11.239 "superblock": false, 00:25:11.239 "num_base_bdevs": 4, 00:25:11.239 "num_base_bdevs_discovered": 3, 00:25:11.239 "num_base_bdevs_operational": 4, 00:25:11.239 "base_bdevs_list": [ 00:25:11.239 { 00:25:11.239 "name": "BaseBdev1", 00:25:11.239 "uuid": "7f242eaa-ecb1-40ed-a1b3-d90ffc85f78d", 00:25:11.239 "is_configured": true, 00:25:11.239 "data_offset": 0, 00:25:11.239 "data_size": 65536 00:25:11.239 }, 00:25:11.239 { 00:25:11.239 "name": "BaseBdev2", 00:25:11.239 "uuid": "fa7a2e10-3c0e-44f4-af6a-54997fe1d633", 00:25:11.239 "is_configured": true, 00:25:11.239 "data_offset": 0, 00:25:11.239 "data_size": 65536 00:25:11.239 }, 00:25:11.239 { 00:25:11.239 "name": "BaseBdev3", 00:25:11.239 "uuid": "3635d4ea-d2ed-4f4b-a96b-034e2253c71a", 00:25:11.239 "is_configured": true, 00:25:11.239 "data_offset": 0, 00:25:11.239 "data_size": 65536 00:25:11.239 }, 00:25:11.239 { 00:25:11.239 "name": "BaseBdev4", 00:25:11.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.239 "is_configured": false, 00:25:11.239 "data_offset": 0, 00:25:11.239 "data_size": 0 00:25:11.239 } 
00:25:11.239 ] 00:25:11.239 }' 00:25:11.239 16:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:11.239 16:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:11.808 16:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:12.146 [2024-07-24 16:42:08.843380] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:12.146 [2024-07-24 16:42:08.843433] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:25:12.146 [2024-07-24 16:42:08.843450] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:12.146 [2024-07-24 16:42:08.843786] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:25:12.146 [2024-07-24 16:42:08.844023] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:25:12.146 [2024-07-24 16:42:08.844042] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:25:12.146 [2024-07-24 16:42:08.844341] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.146 BaseBdev4 00:25:12.146 16:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:12.146 16:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:25:12.146 16:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:12.146 16:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:12.146 16:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:12.146 16:42:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:12.146 16:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:12.405 16:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:12.665 [ 00:25:12.665 { 00:25:12.665 "name": "BaseBdev4", 00:25:12.665 "aliases": [ 00:25:12.665 "4dcec4cf-ee32-43a8-a194-9deb8243a678" 00:25:12.665 ], 00:25:12.665 "product_name": "Malloc disk", 00:25:12.665 "block_size": 512, 00:25:12.665 "num_blocks": 65536, 00:25:12.665 "uuid": "4dcec4cf-ee32-43a8-a194-9deb8243a678", 00:25:12.665 "assigned_rate_limits": { 00:25:12.665 "rw_ios_per_sec": 0, 00:25:12.665 "rw_mbytes_per_sec": 0, 00:25:12.665 "r_mbytes_per_sec": 0, 00:25:12.665 "w_mbytes_per_sec": 0 00:25:12.665 }, 00:25:12.665 "claimed": true, 00:25:12.665 "claim_type": "exclusive_write", 00:25:12.665 "zoned": false, 00:25:12.665 "supported_io_types": { 00:25:12.665 "read": true, 00:25:12.665 "write": true, 00:25:12.665 "unmap": true, 00:25:12.665 "flush": true, 00:25:12.665 "reset": true, 00:25:12.665 "nvme_admin": false, 00:25:12.665 "nvme_io": false, 00:25:12.665 "nvme_io_md": false, 00:25:12.665 "write_zeroes": true, 00:25:12.665 "zcopy": true, 00:25:12.665 "get_zone_info": false, 00:25:12.665 "zone_management": false, 00:25:12.665 "zone_append": false, 00:25:12.665 "compare": false, 00:25:12.665 "compare_and_write": false, 00:25:12.665 "abort": true, 00:25:12.665 "seek_hole": false, 00:25:12.665 "seek_data": false, 00:25:12.665 "copy": true, 00:25:12.665 "nvme_iov_md": false 00:25:12.665 }, 00:25:12.665 "memory_domains": [ 00:25:12.665 { 00:25:12.666 "dma_device_id": "system", 00:25:12.666 "dma_device_type": 1 00:25:12.666 }, 00:25:12.666 { 00:25:12.666 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:25:12.666 "dma_device_type": 2 00:25:12.666 } 00:25:12.666 ], 00:25:12.666 "driver_specific": {} 00:25:12.666 } 00:25:12.666 ] 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.666 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:13.234 16:42:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:13.234 "name": "Existed_Raid", 00:25:13.234 "uuid": "ef0cce88-8d9d-43c9-acf8-f1fbca293c9b", 00:25:13.234 "strip_size_kb": 0, 00:25:13.234 "state": "online", 00:25:13.234 "raid_level": "raid1", 00:25:13.234 "superblock": false, 00:25:13.234 "num_base_bdevs": 4, 00:25:13.234 "num_base_bdevs_discovered": 4, 00:25:13.234 "num_base_bdevs_operational": 4, 00:25:13.234 "base_bdevs_list": [ 00:25:13.234 { 00:25:13.234 "name": "BaseBdev1", 00:25:13.234 "uuid": "7f242eaa-ecb1-40ed-a1b3-d90ffc85f78d", 00:25:13.234 "is_configured": true, 00:25:13.234 "data_offset": 0, 00:25:13.234 "data_size": 65536 00:25:13.234 }, 00:25:13.234 { 00:25:13.234 "name": "BaseBdev2", 00:25:13.234 "uuid": "fa7a2e10-3c0e-44f4-af6a-54997fe1d633", 00:25:13.234 "is_configured": true, 00:25:13.234 "data_offset": 0, 00:25:13.234 "data_size": 65536 00:25:13.234 }, 00:25:13.234 { 00:25:13.234 "name": "BaseBdev3", 00:25:13.234 "uuid": "3635d4ea-d2ed-4f4b-a96b-034e2253c71a", 00:25:13.234 "is_configured": true, 00:25:13.234 "data_offset": 0, 00:25:13.234 "data_size": 65536 00:25:13.234 }, 00:25:13.234 { 00:25:13.234 "name": "BaseBdev4", 00:25:13.234 "uuid": "4dcec4cf-ee32-43a8-a194-9deb8243a678", 00:25:13.234 "is_configured": true, 00:25:13.234 "data_offset": 0, 00:25:13.234 "data_size": 65536 00:25:13.234 } 00:25:13.234 ] 00:25:13.234 }' 00:25:13.234 16:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:13.234 16:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:13.804 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:13.804 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:13.804 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:13.804 16:42:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:13.804 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:13.804 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:13.804 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:13.804 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:13.804 [2024-07-24 16:42:10.620671] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:13.804 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:13.804 "name": "Existed_Raid", 00:25:13.804 "aliases": [ 00:25:13.804 "ef0cce88-8d9d-43c9-acf8-f1fbca293c9b" 00:25:13.804 ], 00:25:13.804 "product_name": "Raid Volume", 00:25:13.804 "block_size": 512, 00:25:13.804 "num_blocks": 65536, 00:25:13.804 "uuid": "ef0cce88-8d9d-43c9-acf8-f1fbca293c9b", 00:25:13.804 "assigned_rate_limits": { 00:25:13.804 "rw_ios_per_sec": 0, 00:25:13.804 "rw_mbytes_per_sec": 0, 00:25:13.804 "r_mbytes_per_sec": 0, 00:25:13.804 "w_mbytes_per_sec": 0 00:25:13.804 }, 00:25:13.804 "claimed": false, 00:25:13.804 "zoned": false, 00:25:13.804 "supported_io_types": { 00:25:13.804 "read": true, 00:25:13.804 "write": true, 00:25:13.804 "unmap": false, 00:25:13.804 "flush": false, 00:25:13.804 "reset": true, 00:25:13.804 "nvme_admin": false, 00:25:13.804 "nvme_io": false, 00:25:13.804 "nvme_io_md": false, 00:25:13.804 "write_zeroes": true, 00:25:13.804 "zcopy": false, 00:25:13.804 "get_zone_info": false, 00:25:13.804 "zone_management": false, 00:25:13.804 "zone_append": false, 00:25:13.804 "compare": false, 00:25:13.804 "compare_and_write": false, 00:25:13.804 "abort": false, 00:25:13.804 "seek_hole": false, 00:25:13.804 "seek_data": 
false, 00:25:13.804 "copy": false, 00:25:13.804 "nvme_iov_md": false 00:25:13.804 }, 00:25:13.804 "memory_domains": [ 00:25:13.804 { 00:25:13.804 "dma_device_id": "system", 00:25:13.804 "dma_device_type": 1 00:25:13.804 }, 00:25:13.804 { 00:25:13.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.804 "dma_device_type": 2 00:25:13.804 }, 00:25:13.804 { 00:25:13.804 "dma_device_id": "system", 00:25:13.804 "dma_device_type": 1 00:25:13.804 }, 00:25:13.804 { 00:25:13.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.804 "dma_device_type": 2 00:25:13.804 }, 00:25:13.804 { 00:25:13.804 "dma_device_id": "system", 00:25:13.804 "dma_device_type": 1 00:25:13.804 }, 00:25:13.804 { 00:25:13.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.804 "dma_device_type": 2 00:25:13.804 }, 00:25:13.804 { 00:25:13.804 "dma_device_id": "system", 00:25:13.804 "dma_device_type": 1 00:25:13.804 }, 00:25:13.804 { 00:25:13.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.804 "dma_device_type": 2 00:25:13.804 } 00:25:13.804 ], 00:25:13.804 "driver_specific": { 00:25:13.804 "raid": { 00:25:13.804 "uuid": "ef0cce88-8d9d-43c9-acf8-f1fbca293c9b", 00:25:13.804 "strip_size_kb": 0, 00:25:13.804 "state": "online", 00:25:13.804 "raid_level": "raid1", 00:25:13.804 "superblock": false, 00:25:13.804 "num_base_bdevs": 4, 00:25:13.804 "num_base_bdevs_discovered": 4, 00:25:13.804 "num_base_bdevs_operational": 4, 00:25:13.804 "base_bdevs_list": [ 00:25:13.804 { 00:25:13.804 "name": "BaseBdev1", 00:25:13.804 "uuid": "7f242eaa-ecb1-40ed-a1b3-d90ffc85f78d", 00:25:13.804 "is_configured": true, 00:25:13.804 "data_offset": 0, 00:25:13.804 "data_size": 65536 00:25:13.805 }, 00:25:13.805 { 00:25:13.805 "name": "BaseBdev2", 00:25:13.805 "uuid": "fa7a2e10-3c0e-44f4-af6a-54997fe1d633", 00:25:13.805 "is_configured": true, 00:25:13.805 "data_offset": 0, 00:25:13.805 "data_size": 65536 00:25:13.805 }, 00:25:13.805 { 00:25:13.805 "name": "BaseBdev3", 00:25:13.805 "uuid": 
"3635d4ea-d2ed-4f4b-a96b-034e2253c71a", 00:25:13.805 "is_configured": true, 00:25:13.805 "data_offset": 0, 00:25:13.805 "data_size": 65536 00:25:13.805 }, 00:25:13.805 { 00:25:13.805 "name": "BaseBdev4", 00:25:13.805 "uuid": "4dcec4cf-ee32-43a8-a194-9deb8243a678", 00:25:13.805 "is_configured": true, 00:25:13.805 "data_offset": 0, 00:25:13.805 "data_size": 65536 00:25:13.805 } 00:25:13.805 ] 00:25:13.805 } 00:25:13.805 } 00:25:13.805 }' 00:25:13.805 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:14.065 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:14.065 BaseBdev2 00:25:14.065 BaseBdev3 00:25:14.065 BaseBdev4' 00:25:14.065 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:14.065 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:14.065 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:14.065 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:14.065 "name": "BaseBdev1", 00:25:14.065 "aliases": [ 00:25:14.065 "7f242eaa-ecb1-40ed-a1b3-d90ffc85f78d" 00:25:14.065 ], 00:25:14.065 "product_name": "Malloc disk", 00:25:14.065 "block_size": 512, 00:25:14.065 "num_blocks": 65536, 00:25:14.065 "uuid": "7f242eaa-ecb1-40ed-a1b3-d90ffc85f78d", 00:25:14.065 "assigned_rate_limits": { 00:25:14.065 "rw_ios_per_sec": 0, 00:25:14.065 "rw_mbytes_per_sec": 0, 00:25:14.065 "r_mbytes_per_sec": 0, 00:25:14.065 "w_mbytes_per_sec": 0 00:25:14.065 }, 00:25:14.065 "claimed": true, 00:25:14.065 "claim_type": "exclusive_write", 00:25:14.065 "zoned": false, 00:25:14.065 "supported_io_types": { 00:25:14.065 "read": true, 00:25:14.065 
"write": true, 00:25:14.065 "unmap": true, 00:25:14.065 "flush": true, 00:25:14.065 "reset": true, 00:25:14.065 "nvme_admin": false, 00:25:14.065 "nvme_io": false, 00:25:14.065 "nvme_io_md": false, 00:25:14.065 "write_zeroes": true, 00:25:14.065 "zcopy": true, 00:25:14.065 "get_zone_info": false, 00:25:14.065 "zone_management": false, 00:25:14.065 "zone_append": false, 00:25:14.065 "compare": false, 00:25:14.065 "compare_and_write": false, 00:25:14.065 "abort": true, 00:25:14.065 "seek_hole": false, 00:25:14.065 "seek_data": false, 00:25:14.065 "copy": true, 00:25:14.065 "nvme_iov_md": false 00:25:14.065 }, 00:25:14.065 "memory_domains": [ 00:25:14.065 { 00:25:14.065 "dma_device_id": "system", 00:25:14.065 "dma_device_type": 1 00:25:14.065 }, 00:25:14.065 { 00:25:14.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:14.065 "dma_device_type": 2 00:25:14.065 } 00:25:14.065 ], 00:25:14.065 "driver_specific": {} 00:25:14.065 }' 00:25:14.065 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:14.325 16:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:14.325 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:14.325 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:14.325 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:14.325 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:14.325 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.325 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.325 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:14.325 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:14.585 16:42:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:14.585 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:14.585 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:14.585 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:14.585 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:14.844 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:14.844 "name": "BaseBdev2", 00:25:14.844 "aliases": [ 00:25:14.844 "fa7a2e10-3c0e-44f4-af6a-54997fe1d633" 00:25:14.844 ], 00:25:14.844 "product_name": "Malloc disk", 00:25:14.844 "block_size": 512, 00:25:14.844 "num_blocks": 65536, 00:25:14.844 "uuid": "fa7a2e10-3c0e-44f4-af6a-54997fe1d633", 00:25:14.844 "assigned_rate_limits": { 00:25:14.844 "rw_ios_per_sec": 0, 00:25:14.844 "rw_mbytes_per_sec": 0, 00:25:14.844 "r_mbytes_per_sec": 0, 00:25:14.844 "w_mbytes_per_sec": 0 00:25:14.844 }, 00:25:14.844 "claimed": true, 00:25:14.844 "claim_type": "exclusive_write", 00:25:14.844 "zoned": false, 00:25:14.844 "supported_io_types": { 00:25:14.844 "read": true, 00:25:14.844 "write": true, 00:25:14.844 "unmap": true, 00:25:14.844 "flush": true, 00:25:14.844 "reset": true, 00:25:14.844 "nvme_admin": false, 00:25:14.844 "nvme_io": false, 00:25:14.844 "nvme_io_md": false, 00:25:14.844 "write_zeroes": true, 00:25:14.844 "zcopy": true, 00:25:14.844 "get_zone_info": false, 00:25:14.844 "zone_management": false, 00:25:14.844 "zone_append": false, 00:25:14.844 "compare": false, 00:25:14.844 "compare_and_write": false, 00:25:14.844 "abort": true, 00:25:14.844 "seek_hole": false, 00:25:14.844 "seek_data": false, 00:25:14.844 "copy": true, 00:25:14.844 "nvme_iov_md": false 00:25:14.844 }, 
00:25:14.844 "memory_domains": [ 00:25:14.844 { 00:25:14.844 "dma_device_id": "system", 00:25:14.844 "dma_device_type": 1 00:25:14.844 }, 00:25:14.844 { 00:25:14.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:14.844 "dma_device_type": 2 00:25:14.844 } 00:25:14.844 ], 00:25:14.844 "driver_specific": {} 00:25:14.844 }' 00:25:14.844 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:14.844 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:14.844 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:14.844 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:14.845 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:14.845 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:14.845 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.845 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:14.845 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:14.845 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:15.104 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:15.104 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:15.104 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:15.104 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:15.104 16:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:15.363 16:42:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:15.363 "name": "BaseBdev3", 00:25:15.363 "aliases": [ 00:25:15.363 "3635d4ea-d2ed-4f4b-a96b-034e2253c71a" 00:25:15.363 ], 00:25:15.363 "product_name": "Malloc disk", 00:25:15.363 "block_size": 512, 00:25:15.363 "num_blocks": 65536, 00:25:15.363 "uuid": "3635d4ea-d2ed-4f4b-a96b-034e2253c71a", 00:25:15.363 "assigned_rate_limits": { 00:25:15.363 "rw_ios_per_sec": 0, 00:25:15.363 "rw_mbytes_per_sec": 0, 00:25:15.363 "r_mbytes_per_sec": 0, 00:25:15.363 "w_mbytes_per_sec": 0 00:25:15.363 }, 00:25:15.363 "claimed": true, 00:25:15.363 "claim_type": "exclusive_write", 00:25:15.363 "zoned": false, 00:25:15.363 "supported_io_types": { 00:25:15.363 "read": true, 00:25:15.363 "write": true, 00:25:15.363 "unmap": true, 00:25:15.363 "flush": true, 00:25:15.363 "reset": true, 00:25:15.363 "nvme_admin": false, 00:25:15.363 "nvme_io": false, 00:25:15.363 "nvme_io_md": false, 00:25:15.363 "write_zeroes": true, 00:25:15.363 "zcopy": true, 00:25:15.363 "get_zone_info": false, 00:25:15.363 "zone_management": false, 00:25:15.363 "zone_append": false, 00:25:15.363 "compare": false, 00:25:15.363 "compare_and_write": false, 00:25:15.363 "abort": true, 00:25:15.363 "seek_hole": false, 00:25:15.363 "seek_data": false, 00:25:15.363 "copy": true, 00:25:15.363 "nvme_iov_md": false 00:25:15.363 }, 00:25:15.363 "memory_domains": [ 00:25:15.363 { 00:25:15.363 "dma_device_id": "system", 00:25:15.363 "dma_device_type": 1 00:25:15.363 }, 00:25:15.363 { 00:25:15.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:15.363 "dma_device_type": 2 00:25:15.363 } 00:25:15.363 ], 00:25:15.363 "driver_specific": {} 00:25:15.363 }' 00:25:15.363 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:15.363 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:15.363 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:25:15.363 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:15.363 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:15.363 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:15.363 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:15.623 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:15.623 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:15.623 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:15.623 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:15.623 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:15.623 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:15.623 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:15.623 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:15.883 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:15.883 "name": "BaseBdev4", 00:25:15.883 "aliases": [ 00:25:15.883 "4dcec4cf-ee32-43a8-a194-9deb8243a678" 00:25:15.883 ], 00:25:15.883 "product_name": "Malloc disk", 00:25:15.883 "block_size": 512, 00:25:15.883 "num_blocks": 65536, 00:25:15.883 "uuid": "4dcec4cf-ee32-43a8-a194-9deb8243a678", 00:25:15.883 "assigned_rate_limits": { 00:25:15.883 "rw_ios_per_sec": 0, 00:25:15.883 "rw_mbytes_per_sec": 0, 00:25:15.883 "r_mbytes_per_sec": 0, 00:25:15.883 "w_mbytes_per_sec": 0 00:25:15.883 }, 00:25:15.883 "claimed": true, 00:25:15.883 
"claim_type": "exclusive_write", 00:25:15.883 "zoned": false, 00:25:15.883 "supported_io_types": { 00:25:15.883 "read": true, 00:25:15.883 "write": true, 00:25:15.883 "unmap": true, 00:25:15.883 "flush": true, 00:25:15.883 "reset": true, 00:25:15.883 "nvme_admin": false, 00:25:15.883 "nvme_io": false, 00:25:15.883 "nvme_io_md": false, 00:25:15.883 "write_zeroes": true, 00:25:15.883 "zcopy": true, 00:25:15.883 "get_zone_info": false, 00:25:15.883 "zone_management": false, 00:25:15.883 "zone_append": false, 00:25:15.883 "compare": false, 00:25:15.883 "compare_and_write": false, 00:25:15.883 "abort": true, 00:25:15.883 "seek_hole": false, 00:25:15.883 "seek_data": false, 00:25:15.883 "copy": true, 00:25:15.883 "nvme_iov_md": false 00:25:15.883 }, 00:25:15.883 "memory_domains": [ 00:25:15.883 { 00:25:15.883 "dma_device_id": "system", 00:25:15.883 "dma_device_type": 1 00:25:15.883 }, 00:25:15.883 { 00:25:15.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:15.883 "dma_device_type": 2 00:25:15.883 } 00:25:15.883 ], 00:25:15.883 "driver_specific": {} 00:25:15.883 }' 00:25:15.883 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:15.883 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:15.883 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:15.883 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:15.883 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:15.883 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:16.143 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:16.143 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:16.143 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:25:16.143 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:16.143 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:16.143 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:16.143 16:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:16.403 [2024-07-24 16:42:13.111246] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:16.403 16:42:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.403 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:16.664 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:16.664 "name": "Existed_Raid", 00:25:16.664 "uuid": "ef0cce88-8d9d-43c9-acf8-f1fbca293c9b", 00:25:16.664 "strip_size_kb": 0, 00:25:16.664 "state": "online", 00:25:16.664 "raid_level": "raid1", 00:25:16.664 "superblock": false, 00:25:16.664 "num_base_bdevs": 4, 00:25:16.664 "num_base_bdevs_discovered": 3, 00:25:16.664 "num_base_bdevs_operational": 3, 00:25:16.664 "base_bdevs_list": [ 00:25:16.664 { 00:25:16.664 "name": null, 00:25:16.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.664 "is_configured": false, 00:25:16.664 "data_offset": 0, 00:25:16.664 "data_size": 65536 00:25:16.664 }, 00:25:16.664 { 00:25:16.664 "name": "BaseBdev2", 00:25:16.664 "uuid": "fa7a2e10-3c0e-44f4-af6a-54997fe1d633", 00:25:16.664 "is_configured": true, 00:25:16.664 "data_offset": 0, 00:25:16.664 "data_size": 65536 00:25:16.664 }, 00:25:16.664 { 00:25:16.664 "name": "BaseBdev3", 00:25:16.664 "uuid": "3635d4ea-d2ed-4f4b-a96b-034e2253c71a", 00:25:16.664 "is_configured": true, 00:25:16.664 "data_offset": 0, 00:25:16.664 "data_size": 65536 00:25:16.664 }, 00:25:16.664 { 00:25:16.664 "name": "BaseBdev4", 00:25:16.664 "uuid": "4dcec4cf-ee32-43a8-a194-9deb8243a678", 00:25:16.664 "is_configured": true, 00:25:16.664 "data_offset": 0, 00:25:16.664 
"data_size": 65536 00:25:16.664 } 00:25:16.664 ] 00:25:16.664 }' 00:25:16.664 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:16.664 16:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.235 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:17.235 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:17.235 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.235 16:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:17.495 16:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:17.495 16:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:17.495 16:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:17.755 [2024-07-24 16:42:14.413781] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:17.755 16:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:17.755 16:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:17.755 16:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:17.755 16:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.014 16:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:18.014 16:42:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:18.014 16:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:18.274 [2024-07-24 16:42:14.990385] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:18.533 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:18.533 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:18.533 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.533 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:18.533 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:18.533 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:18.533 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:18.793 [2024-07-24 16:42:15.576234] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:18.793 [2024-07-24 16:42:15.576338] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:19.053 [2024-07-24 16:42:15.709589] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:19.053 [2024-07-24 16:42:15.709642] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:19.053 [2024-07-24 16:42:15.709661] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x61600003ff80 name Existed_Raid, state offline 00:25:19.053 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:19.053 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:19.053 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.053 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:19.312 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:19.312 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:19.312 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:19.312 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:19.312 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:19.312 16:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:19.572 BaseBdev2 00:25:19.572 16:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:19.572 16:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:19.572 16:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:19.572 16:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:19.572 16:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:19.572 16:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:25:19.572 16:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:19.831 16:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:19.831 [ 00:25:19.831 { 00:25:19.831 "name": "BaseBdev2", 00:25:19.831 "aliases": [ 00:25:19.831 "fd1fd506-f6f3-4de3-b399-84a3676104ff" 00:25:19.831 ], 00:25:19.831 "product_name": "Malloc disk", 00:25:19.831 "block_size": 512, 00:25:19.831 "num_blocks": 65536, 00:25:19.831 "uuid": "fd1fd506-f6f3-4de3-b399-84a3676104ff", 00:25:19.831 "assigned_rate_limits": { 00:25:19.831 "rw_ios_per_sec": 0, 00:25:19.831 "rw_mbytes_per_sec": 0, 00:25:19.831 "r_mbytes_per_sec": 0, 00:25:19.831 "w_mbytes_per_sec": 0 00:25:19.831 }, 00:25:19.831 "claimed": false, 00:25:19.831 "zoned": false, 00:25:19.831 "supported_io_types": { 00:25:19.831 "read": true, 00:25:19.831 "write": true, 00:25:19.831 "unmap": true, 00:25:19.831 "flush": true, 00:25:19.831 "reset": true, 00:25:19.831 "nvme_admin": false, 00:25:19.831 "nvme_io": false, 00:25:19.831 "nvme_io_md": false, 00:25:19.831 "write_zeroes": true, 00:25:19.831 "zcopy": true, 00:25:19.831 "get_zone_info": false, 00:25:19.831 "zone_management": false, 00:25:19.831 "zone_append": false, 00:25:19.831 "compare": false, 00:25:19.831 "compare_and_write": false, 00:25:19.831 "abort": true, 00:25:19.831 "seek_hole": false, 00:25:19.831 "seek_data": false, 00:25:19.831 "copy": true, 00:25:19.831 "nvme_iov_md": false 00:25:19.831 }, 00:25:19.831 "memory_domains": [ 00:25:19.831 { 00:25:19.832 "dma_device_id": "system", 00:25:19.832 "dma_device_type": 1 00:25:19.832 }, 00:25:19.832 { 00:25:19.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:19.832 "dma_device_type": 2 00:25:19.832 } 00:25:19.832 ], 00:25:19.832 
"driver_specific": {} 00:25:19.832 } 00:25:19.832 ] 00:25:19.832 16:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:19.832 16:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:19.832 16:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:19.832 16:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:20.092 BaseBdev3 00:25:20.092 16:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:20.092 16:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:20.092 16:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:20.092 16:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:20.092 16:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:20.092 16:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:20.092 16:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:20.351 16:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:20.610 [ 00:25:20.610 { 00:25:20.610 "name": "BaseBdev3", 00:25:20.610 "aliases": [ 00:25:20.610 "9d05f5c4-7c41-4337-8c88-728f22016480" 00:25:20.610 ], 00:25:20.610 "product_name": "Malloc disk", 00:25:20.610 "block_size": 512, 00:25:20.610 "num_blocks": 65536, 00:25:20.610 "uuid": 
"9d05f5c4-7c41-4337-8c88-728f22016480", 00:25:20.610 "assigned_rate_limits": { 00:25:20.610 "rw_ios_per_sec": 0, 00:25:20.610 "rw_mbytes_per_sec": 0, 00:25:20.610 "r_mbytes_per_sec": 0, 00:25:20.610 "w_mbytes_per_sec": 0 00:25:20.610 }, 00:25:20.610 "claimed": false, 00:25:20.610 "zoned": false, 00:25:20.610 "supported_io_types": { 00:25:20.610 "read": true, 00:25:20.610 "write": true, 00:25:20.610 "unmap": true, 00:25:20.610 "flush": true, 00:25:20.610 "reset": true, 00:25:20.610 "nvme_admin": false, 00:25:20.610 "nvme_io": false, 00:25:20.610 "nvme_io_md": false, 00:25:20.610 "write_zeroes": true, 00:25:20.610 "zcopy": true, 00:25:20.610 "get_zone_info": false, 00:25:20.610 "zone_management": false, 00:25:20.610 "zone_append": false, 00:25:20.610 "compare": false, 00:25:20.610 "compare_and_write": false, 00:25:20.610 "abort": true, 00:25:20.610 "seek_hole": false, 00:25:20.610 "seek_data": false, 00:25:20.610 "copy": true, 00:25:20.610 "nvme_iov_md": false 00:25:20.610 }, 00:25:20.610 "memory_domains": [ 00:25:20.610 { 00:25:20.610 "dma_device_id": "system", 00:25:20.610 "dma_device_type": 1 00:25:20.610 }, 00:25:20.610 { 00:25:20.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:20.610 "dma_device_type": 2 00:25:20.610 } 00:25:20.610 ], 00:25:20.610 "driver_specific": {} 00:25:20.610 } 00:25:20.610 ] 00:25:20.610 16:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:20.610 16:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:20.610 16:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:20.610 16:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:20.869 BaseBdev4 00:25:20.869 16:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 
00:25:20.869 16:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:25:20.869 16:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:20.870 16:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:20.870 16:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:20.870 16:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:20.870 16:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:21.129 16:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:21.388 [ 00:25:21.388 { 00:25:21.388 "name": "BaseBdev4", 00:25:21.388 "aliases": [ 00:25:21.388 "09a4ed0f-4b46-4ab2-b125-c3b2349d3773" 00:25:21.388 ], 00:25:21.388 "product_name": "Malloc disk", 00:25:21.388 "block_size": 512, 00:25:21.388 "num_blocks": 65536, 00:25:21.388 "uuid": "09a4ed0f-4b46-4ab2-b125-c3b2349d3773", 00:25:21.388 "assigned_rate_limits": { 00:25:21.388 "rw_ios_per_sec": 0, 00:25:21.388 "rw_mbytes_per_sec": 0, 00:25:21.388 "r_mbytes_per_sec": 0, 00:25:21.388 "w_mbytes_per_sec": 0 00:25:21.388 }, 00:25:21.388 "claimed": false, 00:25:21.388 "zoned": false, 00:25:21.388 "supported_io_types": { 00:25:21.388 "read": true, 00:25:21.388 "write": true, 00:25:21.388 "unmap": true, 00:25:21.388 "flush": true, 00:25:21.388 "reset": true, 00:25:21.388 "nvme_admin": false, 00:25:21.388 "nvme_io": false, 00:25:21.388 "nvme_io_md": false, 00:25:21.388 "write_zeroes": true, 00:25:21.388 "zcopy": true, 00:25:21.388 "get_zone_info": false, 00:25:21.388 "zone_management": false, 00:25:21.388 
"zone_append": false, 00:25:21.388 "compare": false, 00:25:21.388 "compare_and_write": false, 00:25:21.388 "abort": true, 00:25:21.388 "seek_hole": false, 00:25:21.388 "seek_data": false, 00:25:21.388 "copy": true, 00:25:21.388 "nvme_iov_md": false 00:25:21.388 }, 00:25:21.388 "memory_domains": [ 00:25:21.388 { 00:25:21.388 "dma_device_id": "system", 00:25:21.388 "dma_device_type": 1 00:25:21.388 }, 00:25:21.388 { 00:25:21.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:21.388 "dma_device_type": 2 00:25:21.388 } 00:25:21.388 ], 00:25:21.388 "driver_specific": {} 00:25:21.388 } 00:25:21.388 ] 00:25:21.388 16:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:21.388 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:21.388 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:21.388 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:21.647 [2024-07-24 16:42:18.296783] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:21.647 [2024-07-24 16:42:18.296835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:21.647 [2024-07-24 16:42:18.296864] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:21.647 [2024-07-24 16:42:18.299163] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:21.647 [2024-07-24 16:42:18.299218] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:21.647 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:21.647 16:42:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:21.647 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:21.647 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:21.647 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:21.647 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:21.647 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:21.647 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:21.647 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:21.647 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:21.647 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.647 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:21.906 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:21.906 "name": "Existed_Raid", 00:25:21.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.906 "strip_size_kb": 0, 00:25:21.906 "state": "configuring", 00:25:21.906 "raid_level": "raid1", 00:25:21.906 "superblock": false, 00:25:21.906 "num_base_bdevs": 4, 00:25:21.906 "num_base_bdevs_discovered": 3, 00:25:21.906 "num_base_bdevs_operational": 4, 00:25:21.906 "base_bdevs_list": [ 00:25:21.906 { 00:25:21.906 "name": "BaseBdev1", 00:25:21.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.906 "is_configured": false, 00:25:21.906 "data_offset": 
0, 00:25:21.906 "data_size": 0 00:25:21.906 }, 00:25:21.906 { 00:25:21.906 "name": "BaseBdev2", 00:25:21.906 "uuid": "fd1fd506-f6f3-4de3-b399-84a3676104ff", 00:25:21.906 "is_configured": true, 00:25:21.906 "data_offset": 0, 00:25:21.906 "data_size": 65536 00:25:21.906 }, 00:25:21.906 { 00:25:21.906 "name": "BaseBdev3", 00:25:21.906 "uuid": "9d05f5c4-7c41-4337-8c88-728f22016480", 00:25:21.906 "is_configured": true, 00:25:21.906 "data_offset": 0, 00:25:21.906 "data_size": 65536 00:25:21.906 }, 00:25:21.906 { 00:25:21.906 "name": "BaseBdev4", 00:25:21.906 "uuid": "09a4ed0f-4b46-4ab2-b125-c3b2349d3773", 00:25:21.906 "is_configured": true, 00:25:21.906 "data_offset": 0, 00:25:21.906 "data_size": 65536 00:25:21.906 } 00:25:21.906 ] 00:25:21.906 }' 00:25:21.906 16:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:21.906 16:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.474 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:22.474 [2024-07-24 16:42:19.319502] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.733 "name": "Existed_Raid", 00:25:22.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.733 "strip_size_kb": 0, 00:25:22.733 "state": "configuring", 00:25:22.733 "raid_level": "raid1", 00:25:22.733 "superblock": false, 00:25:22.733 "num_base_bdevs": 4, 00:25:22.733 "num_base_bdevs_discovered": 2, 00:25:22.733 "num_base_bdevs_operational": 4, 00:25:22.733 "base_bdevs_list": [ 00:25:22.733 { 00:25:22.733 "name": "BaseBdev1", 00:25:22.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.733 "is_configured": false, 00:25:22.733 "data_offset": 0, 00:25:22.733 "data_size": 0 00:25:22.733 }, 00:25:22.733 { 00:25:22.733 "name": null, 00:25:22.733 "uuid": "fd1fd506-f6f3-4de3-b399-84a3676104ff", 00:25:22.733 "is_configured": false, 00:25:22.733 "data_offset": 0, 00:25:22.733 "data_size": 65536 00:25:22.733 }, 00:25:22.733 { 00:25:22.733 "name": "BaseBdev3", 00:25:22.733 "uuid": "9d05f5c4-7c41-4337-8c88-728f22016480", 00:25:22.733 "is_configured": true, 00:25:22.733 "data_offset": 0, 00:25:22.733 "data_size": 65536 00:25:22.733 }, 00:25:22.733 { 00:25:22.733 "name": "BaseBdev4", 00:25:22.733 
"uuid": "09a4ed0f-4b46-4ab2-b125-c3b2349d3773", 00:25:22.733 "is_configured": true, 00:25:22.733 "data_offset": 0, 00:25:22.733 "data_size": 65536 00:25:22.733 } 00:25:22.733 ] 00:25:22.733 }' 00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.733 16:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.302 16:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.302 16:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:23.560 16:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:23.560 16:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:23.820 [2024-07-24 16:42:20.630177] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:23.820 BaseBdev1 00:25:23.820 16:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:23.820 16:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:23.820 16:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:23.820 16:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:23.820 16:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:23.820 16:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:23.820 16:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:24.079 16:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:24.413 [ 00:25:24.413 { 00:25:24.413 "name": "BaseBdev1", 00:25:24.413 "aliases": [ 00:25:24.413 "68063f0f-a3fb-4097-b5d6-c9084ce28277" 00:25:24.413 ], 00:25:24.413 "product_name": "Malloc disk", 00:25:24.413 "block_size": 512, 00:25:24.413 "num_blocks": 65536, 00:25:24.413 "uuid": "68063f0f-a3fb-4097-b5d6-c9084ce28277", 00:25:24.413 "assigned_rate_limits": { 00:25:24.413 "rw_ios_per_sec": 0, 00:25:24.413 "rw_mbytes_per_sec": 0, 00:25:24.413 "r_mbytes_per_sec": 0, 00:25:24.413 "w_mbytes_per_sec": 0 00:25:24.413 }, 00:25:24.413 "claimed": true, 00:25:24.413 "claim_type": "exclusive_write", 00:25:24.413 "zoned": false, 00:25:24.413 "supported_io_types": { 00:25:24.413 "read": true, 00:25:24.413 "write": true, 00:25:24.413 "unmap": true, 00:25:24.413 "flush": true, 00:25:24.413 "reset": true, 00:25:24.413 "nvme_admin": false, 00:25:24.413 "nvme_io": false, 00:25:24.413 "nvme_io_md": false, 00:25:24.413 "write_zeroes": true, 00:25:24.413 "zcopy": true, 00:25:24.413 "get_zone_info": false, 00:25:24.413 "zone_management": false, 00:25:24.413 "zone_append": false, 00:25:24.413 "compare": false, 00:25:24.413 "compare_and_write": false, 00:25:24.413 "abort": true, 00:25:24.413 "seek_hole": false, 00:25:24.413 "seek_data": false, 00:25:24.413 "copy": true, 00:25:24.413 "nvme_iov_md": false 00:25:24.413 }, 00:25:24.413 "memory_domains": [ 00:25:24.413 { 00:25:24.413 "dma_device_id": "system", 00:25:24.413 "dma_device_type": 1 00:25:24.413 }, 00:25:24.413 { 00:25:24.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:24.413 "dma_device_type": 2 00:25:24.413 } 00:25:24.413 ], 00:25:24.413 "driver_specific": {} 00:25:24.413 } 00:25:24.413 ] 
00:25:24.413 16:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:24.413 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:24.413 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:24.413 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:24.413 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:24.413 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:24.413 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:24.413 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:24.413 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:24.413 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:24.413 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:24.413 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.413 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:24.413 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:24.413 "name": "Existed_Raid", 00:25:24.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.413 "strip_size_kb": 0, 00:25:24.414 "state": "configuring", 00:25:24.414 "raid_level": "raid1", 00:25:24.414 "superblock": false, 00:25:24.414 "num_base_bdevs": 4, 00:25:24.414 
"num_base_bdevs_discovered": 3, 00:25:24.414 "num_base_bdevs_operational": 4, 00:25:24.414 "base_bdevs_list": [ 00:25:24.414 { 00:25:24.414 "name": "BaseBdev1", 00:25:24.414 "uuid": "68063f0f-a3fb-4097-b5d6-c9084ce28277", 00:25:24.414 "is_configured": true, 00:25:24.414 "data_offset": 0, 00:25:24.414 "data_size": 65536 00:25:24.414 }, 00:25:24.414 { 00:25:24.414 "name": null, 00:25:24.414 "uuid": "fd1fd506-f6f3-4de3-b399-84a3676104ff", 00:25:24.414 "is_configured": false, 00:25:24.414 "data_offset": 0, 00:25:24.414 "data_size": 65536 00:25:24.414 }, 00:25:24.414 { 00:25:24.414 "name": "BaseBdev3", 00:25:24.414 "uuid": "9d05f5c4-7c41-4337-8c88-728f22016480", 00:25:24.414 "is_configured": true, 00:25:24.414 "data_offset": 0, 00:25:24.414 "data_size": 65536 00:25:24.414 }, 00:25:24.414 { 00:25:24.414 "name": "BaseBdev4", 00:25:24.414 "uuid": "09a4ed0f-4b46-4ab2-b125-c3b2349d3773", 00:25:24.414 "is_configured": true, 00:25:24.414 "data_offset": 0, 00:25:24.414 "data_size": 65536 00:25:24.414 } 00:25:24.414 ] 00:25:24.414 }' 00:25:24.414 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:24.414 16:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.351 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.352 16:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:25.352 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:25.352 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:25.611 [2024-07-24 16:42:22.290741] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:25:25.611 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:25.611 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:25.611 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:25.611 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:25.611 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:25.611 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:25.611 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:25.611 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:25.611 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:25.611 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:25.611 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.611 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:25.870 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:25.870 "name": "Existed_Raid", 00:25:25.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.870 "strip_size_kb": 0, 00:25:25.870 "state": "configuring", 00:25:25.870 "raid_level": "raid1", 00:25:25.870 "superblock": false, 00:25:25.870 "num_base_bdevs": 4, 00:25:25.870 "num_base_bdevs_discovered": 2, 00:25:25.870 "num_base_bdevs_operational": 4, 00:25:25.870 "base_bdevs_list": 
[ 00:25:25.870 { 00:25:25.870 "name": "BaseBdev1", 00:25:25.870 "uuid": "68063f0f-a3fb-4097-b5d6-c9084ce28277", 00:25:25.870 "is_configured": true, 00:25:25.870 "data_offset": 0, 00:25:25.870 "data_size": 65536 00:25:25.870 }, 00:25:25.870 { 00:25:25.870 "name": null, 00:25:25.870 "uuid": "fd1fd506-f6f3-4de3-b399-84a3676104ff", 00:25:25.870 "is_configured": false, 00:25:25.870 "data_offset": 0, 00:25:25.870 "data_size": 65536 00:25:25.870 }, 00:25:25.870 { 00:25:25.870 "name": null, 00:25:25.870 "uuid": "9d05f5c4-7c41-4337-8c88-728f22016480", 00:25:25.870 "is_configured": false, 00:25:25.870 "data_offset": 0, 00:25:25.870 "data_size": 65536 00:25:25.870 }, 00:25:25.870 { 00:25:25.870 "name": "BaseBdev4", 00:25:25.870 "uuid": "09a4ed0f-4b46-4ab2-b125-c3b2349d3773", 00:25:25.870 "is_configured": true, 00:25:25.870 "data_offset": 0, 00:25:25.870 "data_size": 65536 00:25:25.870 } 00:25:25.870 ] 00:25:25.870 }' 00:25:25.870 16:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:25.870 16:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:26.437 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.438 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:26.697 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:26.697 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:26.697 [2024-07-24 16:42:23.546123] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 
-- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:26.955 "name": "Existed_Raid", 00:25:26.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.955 "strip_size_kb": 0, 00:25:26.955 "state": "configuring", 00:25:26.955 "raid_level": "raid1", 00:25:26.955 "superblock": false, 00:25:26.955 "num_base_bdevs": 4, 00:25:26.955 "num_base_bdevs_discovered": 3, 00:25:26.955 "num_base_bdevs_operational": 4, 00:25:26.955 "base_bdevs_list": [ 00:25:26.955 { 00:25:26.955 "name": "BaseBdev1", 00:25:26.955 "uuid": 
"68063f0f-a3fb-4097-b5d6-c9084ce28277", 00:25:26.955 "is_configured": true, 00:25:26.955 "data_offset": 0, 00:25:26.955 "data_size": 65536 00:25:26.955 }, 00:25:26.955 { 00:25:26.955 "name": null, 00:25:26.955 "uuid": "fd1fd506-f6f3-4de3-b399-84a3676104ff", 00:25:26.955 "is_configured": false, 00:25:26.955 "data_offset": 0, 00:25:26.955 "data_size": 65536 00:25:26.955 }, 00:25:26.955 { 00:25:26.955 "name": "BaseBdev3", 00:25:26.955 "uuid": "9d05f5c4-7c41-4337-8c88-728f22016480", 00:25:26.955 "is_configured": true, 00:25:26.955 "data_offset": 0, 00:25:26.955 "data_size": 65536 00:25:26.955 }, 00:25:26.955 { 00:25:26.955 "name": "BaseBdev4", 00:25:26.955 "uuid": "09a4ed0f-4b46-4ab2-b125-c3b2349d3773", 00:25:26.955 "is_configured": true, 00:25:26.955 "data_offset": 0, 00:25:26.955 "data_size": 65536 00:25:26.955 } 00:25:26.955 ] 00:25:26.955 }' 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:26.955 16:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.891 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.891 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:27.891 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:27.891 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:28.150 [2024-07-24 16:42:24.821653] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:28.150 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:28.150 16:42:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:28.150 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:28.150 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:28.150 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:28.150 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:28.150 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:28.150 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:28.150 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:28.150 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:28.150 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.150 16:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.410 16:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:28.410 "name": "Existed_Raid", 00:25:28.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.410 "strip_size_kb": 0, 00:25:28.410 "state": "configuring", 00:25:28.410 "raid_level": "raid1", 00:25:28.410 "superblock": false, 00:25:28.410 "num_base_bdevs": 4, 00:25:28.410 "num_base_bdevs_discovered": 2, 00:25:28.410 "num_base_bdevs_operational": 4, 00:25:28.410 "base_bdevs_list": [ 00:25:28.410 { 00:25:28.410 "name": null, 00:25:28.410 "uuid": "68063f0f-a3fb-4097-b5d6-c9084ce28277", 00:25:28.410 "is_configured": false, 00:25:28.410 "data_offset": 0, 
00:25:28.410 "data_size": 65536 00:25:28.410 }, 00:25:28.410 { 00:25:28.410 "name": null, 00:25:28.410 "uuid": "fd1fd506-f6f3-4de3-b399-84a3676104ff", 00:25:28.410 "is_configured": false, 00:25:28.410 "data_offset": 0, 00:25:28.410 "data_size": 65536 00:25:28.410 }, 00:25:28.410 { 00:25:28.410 "name": "BaseBdev3", 00:25:28.410 "uuid": "9d05f5c4-7c41-4337-8c88-728f22016480", 00:25:28.410 "is_configured": true, 00:25:28.410 "data_offset": 0, 00:25:28.410 "data_size": 65536 00:25:28.410 }, 00:25:28.410 { 00:25:28.410 "name": "BaseBdev4", 00:25:28.410 "uuid": "09a4ed0f-4b46-4ab2-b125-c3b2349d3773", 00:25:28.410 "is_configured": true, 00:25:28.410 "data_offset": 0, 00:25:28.410 "data_size": 65536 00:25:28.410 } 00:25:28.410 ] 00:25:28.410 }' 00:25:28.410 16:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:28.410 16:42:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.978 16:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.978 16:42:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:29.237 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:29.237 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:29.496 [2024-07-24 16:42:26.214098] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:29.496 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:29.496 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:25:29.496 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:29.496 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:29.496 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:29.496 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:29.496 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.496 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.496 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.496 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.496 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.496 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.755 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:29.755 "name": "Existed_Raid", 00:25:29.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.755 "strip_size_kb": 0, 00:25:29.755 "state": "configuring", 00:25:29.755 "raid_level": "raid1", 00:25:29.755 "superblock": false, 00:25:29.755 "num_base_bdevs": 4, 00:25:29.755 "num_base_bdevs_discovered": 3, 00:25:29.756 "num_base_bdevs_operational": 4, 00:25:29.756 "base_bdevs_list": [ 00:25:29.756 { 00:25:29.756 "name": null, 00:25:29.756 "uuid": "68063f0f-a3fb-4097-b5d6-c9084ce28277", 00:25:29.756 "is_configured": false, 00:25:29.756 "data_offset": 0, 00:25:29.756 "data_size": 65536 00:25:29.756 }, 00:25:29.756 { 
00:25:29.756 "name": "BaseBdev2", 00:25:29.756 "uuid": "fd1fd506-f6f3-4de3-b399-84a3676104ff", 00:25:29.756 "is_configured": true, 00:25:29.756 "data_offset": 0, 00:25:29.756 "data_size": 65536 00:25:29.756 }, 00:25:29.756 { 00:25:29.756 "name": "BaseBdev3", 00:25:29.756 "uuid": "9d05f5c4-7c41-4337-8c88-728f22016480", 00:25:29.756 "is_configured": true, 00:25:29.756 "data_offset": 0, 00:25:29.756 "data_size": 65536 00:25:29.756 }, 00:25:29.756 { 00:25:29.756 "name": "BaseBdev4", 00:25:29.756 "uuid": "09a4ed0f-4b46-4ab2-b125-c3b2349d3773", 00:25:29.756 "is_configured": true, 00:25:29.756 "data_offset": 0, 00:25:29.756 "data_size": 65536 00:25:29.756 } 00:25:29.756 ] 00:25:29.756 }' 00:25:29.756 16:42:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:29.756 16:42:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.324 16:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:30.324 16:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.584 16:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:30.584 16:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.584 16:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:30.842 16:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 68063f0f-a3fb-4097-b5d6-c9084ce28277 00:25:31.102 [2024-07-24 16:42:27.784176] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:31.102 [2024-07-24 16:42:27.784223] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:25:31.102 [2024-07-24 16:42:27.784241] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:31.102 [2024-07-24 16:42:27.784569] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:25:31.102 [2024-07-24 16:42:27.784792] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:25:31.102 [2024-07-24 16:42:27.784807] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:25:31.102 [2024-07-24 16:42:27.785119] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:31.102 NewBaseBdev 00:25:31.102 16:42:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:31.102 16:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:25:31.102 16:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:31.102 16:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:31.102 16:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:31.102 16:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:31.102 16:42:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:31.361 16:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 
00:25:31.361 [ 00:25:31.361 { 00:25:31.361 "name": "NewBaseBdev", 00:25:31.361 "aliases": [ 00:25:31.361 "68063f0f-a3fb-4097-b5d6-c9084ce28277" 00:25:31.361 ], 00:25:31.361 "product_name": "Malloc disk", 00:25:31.361 "block_size": 512, 00:25:31.361 "num_blocks": 65536, 00:25:31.361 "uuid": "68063f0f-a3fb-4097-b5d6-c9084ce28277", 00:25:31.361 "assigned_rate_limits": { 00:25:31.361 "rw_ios_per_sec": 0, 00:25:31.361 "rw_mbytes_per_sec": 0, 00:25:31.361 "r_mbytes_per_sec": 0, 00:25:31.361 "w_mbytes_per_sec": 0 00:25:31.361 }, 00:25:31.361 "claimed": true, 00:25:31.361 "claim_type": "exclusive_write", 00:25:31.361 "zoned": false, 00:25:31.361 "supported_io_types": { 00:25:31.361 "read": true, 00:25:31.361 "write": true, 00:25:31.361 "unmap": true, 00:25:31.361 "flush": true, 00:25:31.361 "reset": true, 00:25:31.361 "nvme_admin": false, 00:25:31.361 "nvme_io": false, 00:25:31.361 "nvme_io_md": false, 00:25:31.361 "write_zeroes": true, 00:25:31.361 "zcopy": true, 00:25:31.361 "get_zone_info": false, 00:25:31.361 "zone_management": false, 00:25:31.361 "zone_append": false, 00:25:31.361 "compare": false, 00:25:31.361 "compare_and_write": false, 00:25:31.361 "abort": true, 00:25:31.361 "seek_hole": false, 00:25:31.361 "seek_data": false, 00:25:31.361 "copy": true, 00:25:31.361 "nvme_iov_md": false 00:25:31.361 }, 00:25:31.361 "memory_domains": [ 00:25:31.361 { 00:25:31.361 "dma_device_id": "system", 00:25:31.361 "dma_device_type": 1 00:25:31.361 }, 00:25:31.361 { 00:25:31.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.361 "dma_device_type": 2 00:25:31.361 } 00:25:31.361 ], 00:25:31.361 "driver_specific": {} 00:25:31.361 } 00:25:31.361 ] 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:31.620 "name": "Existed_Raid", 00:25:31.620 "uuid": "b6660817-9020-4004-9cd9-2407988b3daa", 00:25:31.620 "strip_size_kb": 0, 00:25:31.620 "state": "online", 00:25:31.620 "raid_level": "raid1", 00:25:31.620 "superblock": false, 00:25:31.620 "num_base_bdevs": 4, 00:25:31.620 "num_base_bdevs_discovered": 4, 00:25:31.620 "num_base_bdevs_operational": 4, 00:25:31.620 "base_bdevs_list": [ 00:25:31.620 { 00:25:31.620 "name": "NewBaseBdev", 00:25:31.620 "uuid": "68063f0f-a3fb-4097-b5d6-c9084ce28277", 00:25:31.620 "is_configured": true, 00:25:31.620 "data_offset": 0, 00:25:31.620 "data_size": 65536 00:25:31.620 }, 00:25:31.620 { 00:25:31.620 
"name": "BaseBdev2", 00:25:31.620 "uuid": "fd1fd506-f6f3-4de3-b399-84a3676104ff", 00:25:31.620 "is_configured": true, 00:25:31.620 "data_offset": 0, 00:25:31.620 "data_size": 65536 00:25:31.620 }, 00:25:31.620 { 00:25:31.620 "name": "BaseBdev3", 00:25:31.620 "uuid": "9d05f5c4-7c41-4337-8c88-728f22016480", 00:25:31.620 "is_configured": true, 00:25:31.620 "data_offset": 0, 00:25:31.620 "data_size": 65536 00:25:31.620 }, 00:25:31.620 { 00:25:31.620 "name": "BaseBdev4", 00:25:31.620 "uuid": "09a4ed0f-4b46-4ab2-b125-c3b2349d3773", 00:25:31.620 "is_configured": true, 00:25:31.620 "data_offset": 0, 00:25:31.620 "data_size": 65536 00:25:31.620 } 00:25:31.620 ] 00:25:31.620 }' 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:31.620 16:42:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.189 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:32.189 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:32.189 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:32.189 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:32.189 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:32.189 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:32.189 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:32.189 16:42:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:32.447 [2024-07-24 16:42:29.192461] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:32.447 16:42:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:32.448 "name": "Existed_Raid", 00:25:32.448 "aliases": [ 00:25:32.448 "b6660817-9020-4004-9cd9-2407988b3daa" 00:25:32.448 ], 00:25:32.448 "product_name": "Raid Volume", 00:25:32.448 "block_size": 512, 00:25:32.448 "num_blocks": 65536, 00:25:32.448 "uuid": "b6660817-9020-4004-9cd9-2407988b3daa", 00:25:32.448 "assigned_rate_limits": { 00:25:32.448 "rw_ios_per_sec": 0, 00:25:32.448 "rw_mbytes_per_sec": 0, 00:25:32.448 "r_mbytes_per_sec": 0, 00:25:32.448 "w_mbytes_per_sec": 0 00:25:32.448 }, 00:25:32.448 "claimed": false, 00:25:32.448 "zoned": false, 00:25:32.448 "supported_io_types": { 00:25:32.448 "read": true, 00:25:32.448 "write": true, 00:25:32.448 "unmap": false, 00:25:32.448 "flush": false, 00:25:32.448 "reset": true, 00:25:32.448 "nvme_admin": false, 00:25:32.448 "nvme_io": false, 00:25:32.448 "nvme_io_md": false, 00:25:32.448 "write_zeroes": true, 00:25:32.448 "zcopy": false, 00:25:32.448 "get_zone_info": false, 00:25:32.448 "zone_management": false, 00:25:32.448 "zone_append": false, 00:25:32.448 "compare": false, 00:25:32.448 "compare_and_write": false, 00:25:32.448 "abort": false, 00:25:32.448 "seek_hole": false, 00:25:32.448 "seek_data": false, 00:25:32.448 "copy": false, 00:25:32.448 "nvme_iov_md": false 00:25:32.448 }, 00:25:32.448 "memory_domains": [ 00:25:32.448 { 00:25:32.448 "dma_device_id": "system", 00:25:32.448 "dma_device_type": 1 00:25:32.448 }, 00:25:32.448 { 00:25:32.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.448 "dma_device_type": 2 00:25:32.448 }, 00:25:32.448 { 00:25:32.448 "dma_device_id": "system", 00:25:32.448 "dma_device_type": 1 00:25:32.448 }, 00:25:32.448 { 00:25:32.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.448 "dma_device_type": 2 00:25:32.448 }, 00:25:32.448 { 00:25:32.448 "dma_device_id": "system", 00:25:32.448 "dma_device_type": 1 00:25:32.448 }, 00:25:32.448 { 00:25:32.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:25:32.448 "dma_device_type": 2 00:25:32.448 }, 00:25:32.448 { 00:25:32.448 "dma_device_id": "system", 00:25:32.448 "dma_device_type": 1 00:25:32.448 }, 00:25:32.448 { 00:25:32.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.448 "dma_device_type": 2 00:25:32.448 } 00:25:32.448 ], 00:25:32.448 "driver_specific": { 00:25:32.448 "raid": { 00:25:32.448 "uuid": "b6660817-9020-4004-9cd9-2407988b3daa", 00:25:32.448 "strip_size_kb": 0, 00:25:32.448 "state": "online", 00:25:32.448 "raid_level": "raid1", 00:25:32.448 "superblock": false, 00:25:32.448 "num_base_bdevs": 4, 00:25:32.448 "num_base_bdevs_discovered": 4, 00:25:32.448 "num_base_bdevs_operational": 4, 00:25:32.448 "base_bdevs_list": [ 00:25:32.448 { 00:25:32.448 "name": "NewBaseBdev", 00:25:32.448 "uuid": "68063f0f-a3fb-4097-b5d6-c9084ce28277", 00:25:32.448 "is_configured": true, 00:25:32.448 "data_offset": 0, 00:25:32.448 "data_size": 65536 00:25:32.448 }, 00:25:32.448 { 00:25:32.448 "name": "BaseBdev2", 00:25:32.448 "uuid": "fd1fd506-f6f3-4de3-b399-84a3676104ff", 00:25:32.448 "is_configured": true, 00:25:32.448 "data_offset": 0, 00:25:32.448 "data_size": 65536 00:25:32.448 }, 00:25:32.448 { 00:25:32.448 "name": "BaseBdev3", 00:25:32.448 "uuid": "9d05f5c4-7c41-4337-8c88-728f22016480", 00:25:32.448 "is_configured": true, 00:25:32.448 "data_offset": 0, 00:25:32.448 "data_size": 65536 00:25:32.448 }, 00:25:32.448 { 00:25:32.448 "name": "BaseBdev4", 00:25:32.448 "uuid": "09a4ed0f-4b46-4ab2-b125-c3b2349d3773", 00:25:32.448 "is_configured": true, 00:25:32.448 "data_offset": 0, 00:25:32.448 "data_size": 65536 00:25:32.448 } 00:25:32.448 ] 00:25:32.448 } 00:25:32.448 } 00:25:32.448 }' 00:25:32.448 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:32.448 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:32.448 BaseBdev2 00:25:32.448 BaseBdev3 
00:25:32.448 BaseBdev4' 00:25:32.448 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:32.448 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:32.448 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:32.707 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:32.707 "name": "NewBaseBdev", 00:25:32.707 "aliases": [ 00:25:32.707 "68063f0f-a3fb-4097-b5d6-c9084ce28277" 00:25:32.707 ], 00:25:32.707 "product_name": "Malloc disk", 00:25:32.707 "block_size": 512, 00:25:32.707 "num_blocks": 65536, 00:25:32.707 "uuid": "68063f0f-a3fb-4097-b5d6-c9084ce28277", 00:25:32.707 "assigned_rate_limits": { 00:25:32.707 "rw_ios_per_sec": 0, 00:25:32.707 "rw_mbytes_per_sec": 0, 00:25:32.707 "r_mbytes_per_sec": 0, 00:25:32.707 "w_mbytes_per_sec": 0 00:25:32.707 }, 00:25:32.707 "claimed": true, 00:25:32.707 "claim_type": "exclusive_write", 00:25:32.707 "zoned": false, 00:25:32.707 "supported_io_types": { 00:25:32.707 "read": true, 00:25:32.707 "write": true, 00:25:32.707 "unmap": true, 00:25:32.707 "flush": true, 00:25:32.707 "reset": true, 00:25:32.707 "nvme_admin": false, 00:25:32.707 "nvme_io": false, 00:25:32.707 "nvme_io_md": false, 00:25:32.707 "write_zeroes": true, 00:25:32.707 "zcopy": true, 00:25:32.707 "get_zone_info": false, 00:25:32.707 "zone_management": false, 00:25:32.707 "zone_append": false, 00:25:32.707 "compare": false, 00:25:32.707 "compare_and_write": false, 00:25:32.707 "abort": true, 00:25:32.707 "seek_hole": false, 00:25:32.707 "seek_data": false, 00:25:32.707 "copy": true, 00:25:32.707 "nvme_iov_md": false 00:25:32.707 }, 00:25:32.707 "memory_domains": [ 00:25:32.707 { 00:25:32.707 "dma_device_id": "system", 00:25:32.707 "dma_device_type": 1 00:25:32.707 }, 00:25:32.707 { 
00:25:32.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.707 "dma_device_type": 2 00:25:32.707 } 00:25:32.707 ], 00:25:32.707 "driver_specific": {} 00:25:32.707 }' 00:25:32.707 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.707 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.966 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:32.966 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.966 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.966 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:32.966 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.966 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.966 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:32.966 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.966 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.966 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:33.224 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:33.225 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:33.225 16:42:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:33.225 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:33.225 "name": "BaseBdev2", 00:25:33.225 "aliases": [ 00:25:33.225 
"fd1fd506-f6f3-4de3-b399-84a3676104ff" 00:25:33.225 ], 00:25:33.225 "product_name": "Malloc disk", 00:25:33.225 "block_size": 512, 00:25:33.225 "num_blocks": 65536, 00:25:33.225 "uuid": "fd1fd506-f6f3-4de3-b399-84a3676104ff", 00:25:33.225 "assigned_rate_limits": { 00:25:33.225 "rw_ios_per_sec": 0, 00:25:33.225 "rw_mbytes_per_sec": 0, 00:25:33.225 "r_mbytes_per_sec": 0, 00:25:33.225 "w_mbytes_per_sec": 0 00:25:33.225 }, 00:25:33.225 "claimed": true, 00:25:33.225 "claim_type": "exclusive_write", 00:25:33.225 "zoned": false, 00:25:33.225 "supported_io_types": { 00:25:33.225 "read": true, 00:25:33.225 "write": true, 00:25:33.225 "unmap": true, 00:25:33.225 "flush": true, 00:25:33.225 "reset": true, 00:25:33.225 "nvme_admin": false, 00:25:33.225 "nvme_io": false, 00:25:33.225 "nvme_io_md": false, 00:25:33.225 "write_zeroes": true, 00:25:33.225 "zcopy": true, 00:25:33.225 "get_zone_info": false, 00:25:33.225 "zone_management": false, 00:25:33.225 "zone_append": false, 00:25:33.225 "compare": false, 00:25:33.225 "compare_and_write": false, 00:25:33.225 "abort": true, 00:25:33.225 "seek_hole": false, 00:25:33.225 "seek_data": false, 00:25:33.225 "copy": true, 00:25:33.225 "nvme_iov_md": false 00:25:33.225 }, 00:25:33.225 "memory_domains": [ 00:25:33.225 { 00:25:33.225 "dma_device_id": "system", 00:25:33.225 "dma_device_type": 1 00:25:33.225 }, 00:25:33.225 { 00:25:33.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.225 "dma_device_type": 2 00:25:33.225 } 00:25:33.225 ], 00:25:33.225 "driver_specific": {} 00:25:33.225 }' 00:25:33.225 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:33.483 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:33.483 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:33.483 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:33.483 16:42:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:33.483 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:33.483 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:33.483 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:33.483 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:33.483 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:33.742 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:33.742 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:33.742 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:33.742 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:33.742 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:34.001 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:34.001 "name": "BaseBdev3", 00:25:34.001 "aliases": [ 00:25:34.001 "9d05f5c4-7c41-4337-8c88-728f22016480" 00:25:34.001 ], 00:25:34.001 "product_name": "Malloc disk", 00:25:34.001 "block_size": 512, 00:25:34.001 "num_blocks": 65536, 00:25:34.001 "uuid": "9d05f5c4-7c41-4337-8c88-728f22016480", 00:25:34.001 "assigned_rate_limits": { 00:25:34.001 "rw_ios_per_sec": 0, 00:25:34.001 "rw_mbytes_per_sec": 0, 00:25:34.001 "r_mbytes_per_sec": 0, 00:25:34.001 "w_mbytes_per_sec": 0 00:25:34.001 }, 00:25:34.001 "claimed": true, 00:25:34.001 "claim_type": "exclusive_write", 00:25:34.001 "zoned": false, 00:25:34.001 "supported_io_types": { 00:25:34.001 "read": true, 
00:25:34.001 "write": true, 00:25:34.001 "unmap": true, 00:25:34.001 "flush": true, 00:25:34.001 "reset": true, 00:25:34.001 "nvme_admin": false, 00:25:34.001 "nvme_io": false, 00:25:34.001 "nvme_io_md": false, 00:25:34.001 "write_zeroes": true, 00:25:34.001 "zcopy": true, 00:25:34.001 "get_zone_info": false, 00:25:34.001 "zone_management": false, 00:25:34.001 "zone_append": false, 00:25:34.001 "compare": false, 00:25:34.001 "compare_and_write": false, 00:25:34.001 "abort": true, 00:25:34.001 "seek_hole": false, 00:25:34.001 "seek_data": false, 00:25:34.001 "copy": true, 00:25:34.001 "nvme_iov_md": false 00:25:34.001 }, 00:25:34.001 "memory_domains": [ 00:25:34.001 { 00:25:34.001 "dma_device_id": "system", 00:25:34.001 "dma_device_type": 1 00:25:34.001 }, 00:25:34.001 { 00:25:34.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.001 "dma_device_type": 2 00:25:34.001 } 00:25:34.001 ], 00:25:34.001 "driver_specific": {} 00:25:34.001 }' 00:25:34.001 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:34.001 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:34.001 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:34.001 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:34.001 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:34.001 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:34.001 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:34.001 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:34.001 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:34.001 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:34.260 
16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:34.260 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:34.260 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:34.260 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:34.260 16:42:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:34.519 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:34.519 "name": "BaseBdev4", 00:25:34.519 "aliases": [ 00:25:34.519 "09a4ed0f-4b46-4ab2-b125-c3b2349d3773" 00:25:34.519 ], 00:25:34.519 "product_name": "Malloc disk", 00:25:34.519 "block_size": 512, 00:25:34.519 "num_blocks": 65536, 00:25:34.519 "uuid": "09a4ed0f-4b46-4ab2-b125-c3b2349d3773", 00:25:34.519 "assigned_rate_limits": { 00:25:34.519 "rw_ios_per_sec": 0, 00:25:34.519 "rw_mbytes_per_sec": 0, 00:25:34.519 "r_mbytes_per_sec": 0, 00:25:34.519 "w_mbytes_per_sec": 0 00:25:34.519 }, 00:25:34.519 "claimed": true, 00:25:34.519 "claim_type": "exclusive_write", 00:25:34.519 "zoned": false, 00:25:34.519 "supported_io_types": { 00:25:34.519 "read": true, 00:25:34.519 "write": true, 00:25:34.519 "unmap": true, 00:25:34.519 "flush": true, 00:25:34.519 "reset": true, 00:25:34.519 "nvme_admin": false, 00:25:34.519 "nvme_io": false, 00:25:34.519 "nvme_io_md": false, 00:25:34.519 "write_zeroes": true, 00:25:34.519 "zcopy": true, 00:25:34.519 "get_zone_info": false, 00:25:34.519 "zone_management": false, 00:25:34.519 "zone_append": false, 00:25:34.519 "compare": false, 00:25:34.519 "compare_and_write": false, 00:25:34.519 "abort": true, 00:25:34.519 "seek_hole": false, 00:25:34.519 "seek_data": false, 00:25:34.519 "copy": true, 00:25:34.519 "nvme_iov_md": false 
00:25:34.519 }, 00:25:34.519 "memory_domains": [ 00:25:34.519 { 00:25:34.519 "dma_device_id": "system", 00:25:34.519 "dma_device_type": 1 00:25:34.519 }, 00:25:34.519 { 00:25:34.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:34.519 "dma_device_type": 2 00:25:34.519 } 00:25:34.519 ], 00:25:34.519 "driver_specific": {} 00:25:34.519 }' 00:25:34.519 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:34.519 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:34.519 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:34.519 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:34.519 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:34.519 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:34.519 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:34.519 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:34.778 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:34.778 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:34.778 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:34.778 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:34.778 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:35.036 [2024-07-24 16:42:31.686802] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:35.036 [2024-07-24 16:42:31.686834] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:25:35.036 [2024-07-24 16:42:31.686921] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:35.036 [2024-07-24 16:42:31.687274] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:35.036 [2024-07-24 16:42:31.687296] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:25:35.037 16:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 1718035 00:25:35.037 16:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 1718035 ']' 00:25:35.037 16:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 1718035 00:25:35.037 16:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:25:35.037 16:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:35.037 16:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1718035 00:25:35.037 16:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:35.037 16:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:35.037 16:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1718035' 00:25:35.037 killing process with pid 1718035 00:25:35.037 16:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 1718035 00:25:35.037 [2024-07-24 16:42:31.760973] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:35.037 16:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 1718035 00:25:35.603 [2024-07-24 16:42:32.226338] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:37.508 
16:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:25:37.508 00:25:37.508 real 0m34.353s 00:25:37.508 user 1m0.271s 00:25:37.508 sys 0m5.871s 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.508 ************************************ 00:25:37.508 END TEST raid_state_function_test 00:25:37.508 ************************************ 00:25:37.508 16:42:33 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:25:37.508 16:42:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:37.508 16:42:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:37.508 16:42:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:37.508 ************************************ 00:25:37.508 START TEST raid_state_function_test_sb 00:25:37.508 ************************************ 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:37.508 16:42:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 
00:25:37.508 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:37.509 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:37.509 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:37.509 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=1725045 00:25:37.509 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1725045' 00:25:37.509 Process raid pid: 1725045 00:25:37.509 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:37.509 16:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 1725045 /var/tmp/spdk-raid.sock 00:25:37.509 16:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 1725045 ']' 00:25:37.509 16:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:37.509 16:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:37.509 16:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:37.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:37.509 16:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:37.509 16:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:37.509 [2024-07-24 16:42:34.075668] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:25:37.509 [2024-07-24 16:42:34.075788] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:37.509 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:37.509 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:37.509 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:37.509 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:37.509 [2024-07-24 16:42:34.301916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.768 [2024-07-24 16:42:34.582332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.399 [2024-07-24 16:42:34.901716] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:38.399 [2024-07-24 16:42:34.901753] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:38.399 16:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.399 16:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:25:38.399 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:38.658 [2024-07-24 16:42:35.282092] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:38.658 [2024-07-24 16:42:35.282157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:25:38.658 [2024-07-24 16:42:35.282172] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:38.658 [2024-07-24 16:42:35.282189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:38.658 [2024-07-24 16:42:35.282200] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:38.658 [2024-07-24 16:42:35.282216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:38.658 [2024-07-24 16:42:35.282227] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:38.658 [2024-07-24 16:42:35.282243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:38.658 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:38.658 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:38.658 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:38.658 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:38.658 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:38.658 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:38.658 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:38.658 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:38.658 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:38.658 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:25:38.658 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.658 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:38.918 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:38.918 "name": "Existed_Raid", 00:25:38.918 "uuid": "39456139-7590-4812-ac0e-f0912405f65f", 00:25:38.918 "strip_size_kb": 0, 00:25:38.918 "state": "configuring", 00:25:38.918 "raid_level": "raid1", 00:25:38.918 "superblock": true, 00:25:38.918 "num_base_bdevs": 4, 00:25:38.918 "num_base_bdevs_discovered": 0, 00:25:38.918 "num_base_bdevs_operational": 4, 00:25:38.918 "base_bdevs_list": [ 00:25:38.918 { 00:25:38.918 "name": "BaseBdev1", 00:25:38.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.918 "is_configured": false, 00:25:38.918 "data_offset": 0, 00:25:38.918 "data_size": 0 00:25:38.918 }, 00:25:38.918 { 00:25:38.918 "name": "BaseBdev2", 00:25:38.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.918 "is_configured": false, 00:25:38.918 "data_offset": 0, 00:25:38.918 "data_size": 0 00:25:38.918 }, 00:25:38.918 { 00:25:38.918 "name": "BaseBdev3", 00:25:38.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.918 "is_configured": false, 00:25:38.918 "data_offset": 0, 00:25:38.918 "data_size": 0 00:25:38.918 }, 00:25:38.918 { 00:25:38.918 "name": "BaseBdev4", 00:25:38.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.918 "is_configured": false, 00:25:38.918 "data_offset": 0, 00:25:38.918 "data_size": 0 00:25:38.918 } 00:25:38.918 ] 00:25:38.918 }' 00:25:38.918 16:42:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:38.918 16:42:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:39.485 16:42:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:39.485 [2024-07-24 16:42:36.288651] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:39.485 [2024-07-24 16:42:36.288692] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:25:39.485 16:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:39.745 [2024-07-24 16:42:36.509317] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:39.745 [2024-07-24 16:42:36.509366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:39.745 [2024-07-24 16:42:36.509379] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:39.745 [2024-07-24 16:42:36.509403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:39.745 [2024-07-24 16:42:36.509414] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:39.745 [2024-07-24 16:42:36.509430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:39.745 [2024-07-24 16:42:36.509441] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:39.745 [2024-07-24 16:42:36.509456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:39.745 16:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 
00:25:40.006 [2024-07-24 16:42:36.796198] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:40.006 BaseBdev1 00:25:40.006 16:42:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:40.006 16:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:40.006 16:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:40.006 16:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:40.006 16:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:40.006 16:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:40.006 16:42:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:40.264 16:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:40.523 [ 00:25:40.523 { 00:25:40.523 "name": "BaseBdev1", 00:25:40.523 "aliases": [ 00:25:40.523 "9680bc07-9137-47f0-92ed-b39b36052c04" 00:25:40.523 ], 00:25:40.523 "product_name": "Malloc disk", 00:25:40.523 "block_size": 512, 00:25:40.523 "num_blocks": 65536, 00:25:40.523 "uuid": "9680bc07-9137-47f0-92ed-b39b36052c04", 00:25:40.523 "assigned_rate_limits": { 00:25:40.523 "rw_ios_per_sec": 0, 00:25:40.523 "rw_mbytes_per_sec": 0, 00:25:40.523 "r_mbytes_per_sec": 0, 00:25:40.523 "w_mbytes_per_sec": 0 00:25:40.523 }, 00:25:40.523 "claimed": true, 00:25:40.523 "claim_type": "exclusive_write", 00:25:40.523 "zoned": false, 00:25:40.523 "supported_io_types": { 00:25:40.523 "read": true, 00:25:40.523 "write": true, 
00:25:40.523 "unmap": true, 00:25:40.523 "flush": true, 00:25:40.523 "reset": true, 00:25:40.523 "nvme_admin": false, 00:25:40.523 "nvme_io": false, 00:25:40.523 "nvme_io_md": false, 00:25:40.523 "write_zeroes": true, 00:25:40.523 "zcopy": true, 00:25:40.523 "get_zone_info": false, 00:25:40.523 "zone_management": false, 00:25:40.523 "zone_append": false, 00:25:40.523 "compare": false, 00:25:40.523 "compare_and_write": false, 00:25:40.523 "abort": true, 00:25:40.523 "seek_hole": false, 00:25:40.523 "seek_data": false, 00:25:40.523 "copy": true, 00:25:40.523 "nvme_iov_md": false 00:25:40.523 }, 00:25:40.523 "memory_domains": [ 00:25:40.523 { 00:25:40.523 "dma_device_id": "system", 00:25:40.523 "dma_device_type": 1 00:25:40.523 }, 00:25:40.523 { 00:25:40.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.523 "dma_device_type": 2 00:25:40.523 } 00:25:40.523 ], 00:25:40.523 "driver_specific": {} 00:25:40.523 } 00:25:40.523 ] 00:25:40.523 16:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:40.523 16:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:40.523 16:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:40.523 16:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:40.523 16:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:40.523 16:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:40.523 16:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:40.523 16:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:40.523 16:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:25:40.523 16:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:40.523 16:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:40.523 16:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.523 16:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.782 16:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:40.782 "name": "Existed_Raid", 00:25:40.782 "uuid": "0f4dc43c-682e-461f-97b6-65d677ed810b", 00:25:40.782 "strip_size_kb": 0, 00:25:40.782 "state": "configuring", 00:25:40.782 "raid_level": "raid1", 00:25:40.782 "superblock": true, 00:25:40.782 "num_base_bdevs": 4, 00:25:40.782 "num_base_bdevs_discovered": 1, 00:25:40.782 "num_base_bdevs_operational": 4, 00:25:40.782 "base_bdevs_list": [ 00:25:40.782 { 00:25:40.782 "name": "BaseBdev1", 00:25:40.782 "uuid": "9680bc07-9137-47f0-92ed-b39b36052c04", 00:25:40.782 "is_configured": true, 00:25:40.782 "data_offset": 2048, 00:25:40.782 "data_size": 63488 00:25:40.782 }, 00:25:40.782 { 00:25:40.782 "name": "BaseBdev2", 00:25:40.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.782 "is_configured": false, 00:25:40.782 "data_offset": 0, 00:25:40.782 "data_size": 0 00:25:40.782 }, 00:25:40.782 { 00:25:40.782 "name": "BaseBdev3", 00:25:40.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.782 "is_configured": false, 00:25:40.782 "data_offset": 0, 00:25:40.782 "data_size": 0 00:25:40.782 }, 00:25:40.782 { 00:25:40.782 "name": "BaseBdev4", 00:25:40.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.782 "is_configured": false, 00:25:40.782 "data_offset": 0, 00:25:40.782 "data_size": 0 00:25:40.782 } 00:25:40.782 ] 
00:25:40.782 }' 00:25:40.782 16:42:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:40.782 16:42:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:41.350 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:41.609 [2024-07-24 16:42:38.280240] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:41.609 [2024-07-24 16:42:38.280296] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:25:41.609 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:41.868 [2024-07-24 16:42:38.508958] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:41.868 [2024-07-24 16:42:38.511253] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:41.868 [2024-07-24 16:42:38.511297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:41.868 [2024-07-24 16:42:38.511311] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:41.868 [2024-07-24 16:42:38.511332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:41.868 [2024-07-24 16:42:38.511344] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:41.868 [2024-07-24 16:42:38.511362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:41.868 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
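The `verify_raid_bdev_state Existed_Raid configuring raid1 0 4` calls traced above capture the `jq`-filtered JSON into `raid_bdev_info` and compare it against the expected state, RAID level, strip size, and operational bdev count. A rough Python rendering of those checks, using the `raid_bdev_info` dump shown above (trimmed to the relevant fields; the bash helper's internals beyond what the trace shows are assumptions):

```python
import json

# raid_bdev_info as captured at bdev_raid.sh@126 above, trimmed to the
# fields the state verification needs.
raid_bdev_info = json.loads("""{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4
}""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Approximation of the bash helper: any mismatch fails the test."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational

# Mirrors: verify_raid_bdev_state Existed_Raid configuring raid1 0 4
verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 0, 4)
print("state check passed")
```

As each `BaseBdevN` is created, only `num_base_bdevs_discovered` advances (1, 2, 3, ...) while the expected state stays `configuring` until all four base bdevs are claimed, at which point the array flips to `online`.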
00:25:41.868 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:41.868 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:41.868 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:41.868 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:41.868 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:41.868 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:41.868 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:41.868 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:41.868 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:41.868 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:41.868 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:41.868 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.868 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:42.127 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:42.127 "name": "Existed_Raid", 00:25:42.128 "uuid": "806b6f71-d1b9-4d75-9b2a-8f57b7dbf13f", 00:25:42.128 "strip_size_kb": 0, 00:25:42.128 "state": "configuring", 00:25:42.128 "raid_level": "raid1", 00:25:42.128 "superblock": true, 00:25:42.128 
"num_base_bdevs": 4, 00:25:42.128 "num_base_bdevs_discovered": 1, 00:25:42.128 "num_base_bdevs_operational": 4, 00:25:42.128 "base_bdevs_list": [ 00:25:42.128 { 00:25:42.128 "name": "BaseBdev1", 00:25:42.128 "uuid": "9680bc07-9137-47f0-92ed-b39b36052c04", 00:25:42.128 "is_configured": true, 00:25:42.128 "data_offset": 2048, 00:25:42.128 "data_size": 63488 00:25:42.128 }, 00:25:42.128 { 00:25:42.128 "name": "BaseBdev2", 00:25:42.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.128 "is_configured": false, 00:25:42.128 "data_offset": 0, 00:25:42.128 "data_size": 0 00:25:42.128 }, 00:25:42.128 { 00:25:42.128 "name": "BaseBdev3", 00:25:42.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.128 "is_configured": false, 00:25:42.128 "data_offset": 0, 00:25:42.128 "data_size": 0 00:25:42.128 }, 00:25:42.128 { 00:25:42.128 "name": "BaseBdev4", 00:25:42.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.128 "is_configured": false, 00:25:42.128 "data_offset": 0, 00:25:42.128 "data_size": 0 00:25:42.128 } 00:25:42.128 ] 00:25:42.128 }' 00:25:42.128 16:42:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:42.128 16:42:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:42.696 16:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:42.696 [2024-07-24 16:42:39.538386] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:42.696 BaseBdev2 00:25:42.696 16:42:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:42.696 16:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:42.696 16:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:25:42.696 16:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:42.696 16:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:42.696 16:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:42.696 16:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:42.956 16:42:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:43.215 [ 00:25:43.215 { 00:25:43.215 "name": "BaseBdev2", 00:25:43.215 "aliases": [ 00:25:43.215 "3232ce29-6673-4fa9-99e3-731ccbf525f1" 00:25:43.215 ], 00:25:43.215 "product_name": "Malloc disk", 00:25:43.215 "block_size": 512, 00:25:43.215 "num_blocks": 65536, 00:25:43.215 "uuid": "3232ce29-6673-4fa9-99e3-731ccbf525f1", 00:25:43.215 "assigned_rate_limits": { 00:25:43.215 "rw_ios_per_sec": 0, 00:25:43.215 "rw_mbytes_per_sec": 0, 00:25:43.215 "r_mbytes_per_sec": 0, 00:25:43.215 "w_mbytes_per_sec": 0 00:25:43.215 }, 00:25:43.215 "claimed": true, 00:25:43.215 "claim_type": "exclusive_write", 00:25:43.215 "zoned": false, 00:25:43.215 "supported_io_types": { 00:25:43.215 "read": true, 00:25:43.215 "write": true, 00:25:43.215 "unmap": true, 00:25:43.215 "flush": true, 00:25:43.215 "reset": true, 00:25:43.215 "nvme_admin": false, 00:25:43.215 "nvme_io": false, 00:25:43.215 "nvme_io_md": false, 00:25:43.215 "write_zeroes": true, 00:25:43.215 "zcopy": true, 00:25:43.215 "get_zone_info": false, 00:25:43.215 "zone_management": false, 00:25:43.215 "zone_append": false, 00:25:43.215 "compare": false, 00:25:43.215 "compare_and_write": false, 00:25:43.215 "abort": true, 00:25:43.215 "seek_hole": false, 00:25:43.215 
"seek_data": false, 00:25:43.215 "copy": true, 00:25:43.215 "nvme_iov_md": false 00:25:43.215 }, 00:25:43.215 "memory_domains": [ 00:25:43.215 { 00:25:43.215 "dma_device_id": "system", 00:25:43.215 "dma_device_type": 1 00:25:43.215 }, 00:25:43.215 { 00:25:43.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.215 "dma_device_type": 2 00:25:43.215 } 00:25:43.215 ], 00:25:43.215 "driver_specific": {} 00:25:43.215 } 00:25:43.215 ] 00:25:43.215 16:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:43.215 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:43.215 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:43.216 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:43.216 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:43.216 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:43.216 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:43.216 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:43.216 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:43.216 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:43.216 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:43.216 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:43.216 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:43.216 16:42:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.216 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:43.475 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:43.475 "name": "Existed_Raid", 00:25:43.475 "uuid": "806b6f71-d1b9-4d75-9b2a-8f57b7dbf13f", 00:25:43.475 "strip_size_kb": 0, 00:25:43.475 "state": "configuring", 00:25:43.475 "raid_level": "raid1", 00:25:43.475 "superblock": true, 00:25:43.475 "num_base_bdevs": 4, 00:25:43.475 "num_base_bdevs_discovered": 2, 00:25:43.475 "num_base_bdevs_operational": 4, 00:25:43.476 "base_bdevs_list": [ 00:25:43.476 { 00:25:43.476 "name": "BaseBdev1", 00:25:43.476 "uuid": "9680bc07-9137-47f0-92ed-b39b36052c04", 00:25:43.476 "is_configured": true, 00:25:43.476 "data_offset": 2048, 00:25:43.476 "data_size": 63488 00:25:43.476 }, 00:25:43.476 { 00:25:43.476 "name": "BaseBdev2", 00:25:43.476 "uuid": "3232ce29-6673-4fa9-99e3-731ccbf525f1", 00:25:43.476 "is_configured": true, 00:25:43.476 "data_offset": 2048, 00:25:43.476 "data_size": 63488 00:25:43.476 }, 00:25:43.476 { 00:25:43.476 "name": "BaseBdev3", 00:25:43.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.476 "is_configured": false, 00:25:43.476 "data_offset": 0, 00:25:43.476 "data_size": 0 00:25:43.476 }, 00:25:43.476 { 00:25:43.476 "name": "BaseBdev4", 00:25:43.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.476 "is_configured": false, 00:25:43.476 "data_offset": 0, 00:25:43.476 "data_size": 0 00:25:43.476 } 00:25:43.476 ] 00:25:43.476 }' 00:25:43.476 16:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:43.476 16:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:44.044 16:42:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:44.302 [2024-07-24 16:42:41.106150] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:44.302 BaseBdev3 00:25:44.302 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:44.302 16:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:44.302 16:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:44.302 16:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:44.302 16:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:44.302 16:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:44.302 16:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:44.561 16:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:44.821 [ 00:25:44.821 { 00:25:44.821 "name": "BaseBdev3", 00:25:44.821 "aliases": [ 00:25:44.821 "8fa8efe2-388a-407b-b810-2ccbc0444944" 00:25:44.821 ], 00:25:44.821 "product_name": "Malloc disk", 00:25:44.821 "block_size": 512, 00:25:44.821 "num_blocks": 65536, 00:25:44.821 "uuid": "8fa8efe2-388a-407b-b810-2ccbc0444944", 00:25:44.821 "assigned_rate_limits": { 00:25:44.821 "rw_ios_per_sec": 0, 00:25:44.821 "rw_mbytes_per_sec": 0, 00:25:44.821 "r_mbytes_per_sec": 0, 00:25:44.821 "w_mbytes_per_sec": 0 00:25:44.821 }, 
00:25:44.821 "claimed": true, 00:25:44.821 "claim_type": "exclusive_write", 00:25:44.821 "zoned": false, 00:25:44.821 "supported_io_types": { 00:25:44.821 "read": true, 00:25:44.821 "write": true, 00:25:44.821 "unmap": true, 00:25:44.821 "flush": true, 00:25:44.821 "reset": true, 00:25:44.821 "nvme_admin": false, 00:25:44.821 "nvme_io": false, 00:25:44.821 "nvme_io_md": false, 00:25:44.821 "write_zeroes": true, 00:25:44.821 "zcopy": true, 00:25:44.821 "get_zone_info": false, 00:25:44.821 "zone_management": false, 00:25:44.821 "zone_append": false, 00:25:44.821 "compare": false, 00:25:44.821 "compare_and_write": false, 00:25:44.821 "abort": true, 00:25:44.821 "seek_hole": false, 00:25:44.821 "seek_data": false, 00:25:44.821 "copy": true, 00:25:44.821 "nvme_iov_md": false 00:25:44.821 }, 00:25:44.821 "memory_domains": [ 00:25:44.821 { 00:25:44.821 "dma_device_id": "system", 00:25:44.821 "dma_device_type": 1 00:25:44.821 }, 00:25:44.821 { 00:25:44.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.821 "dma_device_type": 2 00:25:44.821 } 00:25:44.821 ], 00:25:44.821 "driver_specific": {} 00:25:44.821 } 00:25:44.821 ] 00:25:44.821 16:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:44.821 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:44.821 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:44.821 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:44.821 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:44.821 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:44.821 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:44.821 16:42:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:44.821 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:44.821 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:44.821 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:44.821 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:44.821 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:44.821 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.821 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:45.081 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:45.081 "name": "Existed_Raid", 00:25:45.081 "uuid": "806b6f71-d1b9-4d75-9b2a-8f57b7dbf13f", 00:25:45.081 "strip_size_kb": 0, 00:25:45.081 "state": "configuring", 00:25:45.081 "raid_level": "raid1", 00:25:45.081 "superblock": true, 00:25:45.081 "num_base_bdevs": 4, 00:25:45.081 "num_base_bdevs_discovered": 3, 00:25:45.081 "num_base_bdevs_operational": 4, 00:25:45.081 "base_bdevs_list": [ 00:25:45.081 { 00:25:45.081 "name": "BaseBdev1", 00:25:45.081 "uuid": "9680bc07-9137-47f0-92ed-b39b36052c04", 00:25:45.081 "is_configured": true, 00:25:45.081 "data_offset": 2048, 00:25:45.081 "data_size": 63488 00:25:45.081 }, 00:25:45.081 { 00:25:45.081 "name": "BaseBdev2", 00:25:45.081 "uuid": "3232ce29-6673-4fa9-99e3-731ccbf525f1", 00:25:45.081 "is_configured": true, 00:25:45.081 "data_offset": 2048, 00:25:45.081 "data_size": 63488 00:25:45.081 }, 00:25:45.081 { 00:25:45.081 "name": 
"BaseBdev3", 00:25:45.081 "uuid": "8fa8efe2-388a-407b-b810-2ccbc0444944", 00:25:45.081 "is_configured": true, 00:25:45.081 "data_offset": 2048, 00:25:45.081 "data_size": 63488 00:25:45.081 }, 00:25:45.081 { 00:25:45.081 "name": "BaseBdev4", 00:25:45.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.081 "is_configured": false, 00:25:45.081 "data_offset": 0, 00:25:45.081 "data_size": 0 00:25:45.081 } 00:25:45.081 ] 00:25:45.081 }' 00:25:45.081 16:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:45.081 16:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:45.649 16:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:45.909 [2024-07-24 16:42:42.613534] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:45.909 [2024-07-24 16:42:42.613803] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:25:45.909 [2024-07-24 16:42:42.613827] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:45.909 [2024-07-24 16:42:42.614158] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:25:45.909 [2024-07-24 16:42:42.614387] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:25:45.909 [2024-07-24 16:42:42.614406] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:25:45.909 [2024-07-24 16:42:42.614587] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:45.909 BaseBdev4 00:25:45.909 16:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:45.909 16:42:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:25:45.909 16:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:45.909 16:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:45.909 16:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:45.909 16:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:45.909 16:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:46.168 16:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:46.427 [ 00:25:46.427 { 00:25:46.427 "name": "BaseBdev4", 00:25:46.427 "aliases": [ 00:25:46.427 "6640f777-dd9e-4951-b09f-f9318bb00e57" 00:25:46.427 ], 00:25:46.427 "product_name": "Malloc disk", 00:25:46.427 "block_size": 512, 00:25:46.427 "num_blocks": 65536, 00:25:46.427 "uuid": "6640f777-dd9e-4951-b09f-f9318bb00e57", 00:25:46.427 "assigned_rate_limits": { 00:25:46.427 "rw_ios_per_sec": 0, 00:25:46.427 "rw_mbytes_per_sec": 0, 00:25:46.427 "r_mbytes_per_sec": 0, 00:25:46.427 "w_mbytes_per_sec": 0 00:25:46.427 }, 00:25:46.427 "claimed": true, 00:25:46.427 "claim_type": "exclusive_write", 00:25:46.427 "zoned": false, 00:25:46.427 "supported_io_types": { 00:25:46.427 "read": true, 00:25:46.427 "write": true, 00:25:46.427 "unmap": true, 00:25:46.427 "flush": true, 00:25:46.427 "reset": true, 00:25:46.427 "nvme_admin": false, 00:25:46.427 "nvme_io": false, 00:25:46.427 "nvme_io_md": false, 00:25:46.427 "write_zeroes": true, 00:25:46.427 "zcopy": true, 00:25:46.427 "get_zone_info": false, 00:25:46.427 "zone_management": false, 
00:25:46.427 "zone_append": false, 00:25:46.427 "compare": false, 00:25:46.427 "compare_and_write": false, 00:25:46.427 "abort": true, 00:25:46.427 "seek_hole": false, 00:25:46.427 "seek_data": false, 00:25:46.427 "copy": true, 00:25:46.427 "nvme_iov_md": false 00:25:46.427 }, 00:25:46.427 "memory_domains": [ 00:25:46.427 { 00:25:46.427 "dma_device_id": "system", 00:25:46.427 "dma_device_type": 1 00:25:46.427 }, 00:25:46.427 { 00:25:46.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.427 "dma_device_type": 2 00:25:46.427 } 00:25:46.427 ], 00:25:46.427 "driver_specific": {} 00:25:46.427 } 00:25:46.427 ] 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.427 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.687 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.687 "name": "Existed_Raid", 00:25:46.687 "uuid": "806b6f71-d1b9-4d75-9b2a-8f57b7dbf13f", 00:25:46.687 "strip_size_kb": 0, 00:25:46.687 "state": "online", 00:25:46.687 "raid_level": "raid1", 00:25:46.687 "superblock": true, 00:25:46.687 "num_base_bdevs": 4, 00:25:46.687 "num_base_bdevs_discovered": 4, 00:25:46.687 "num_base_bdevs_operational": 4, 00:25:46.687 "base_bdevs_list": [ 00:25:46.687 { 00:25:46.687 "name": "BaseBdev1", 00:25:46.687 "uuid": "9680bc07-9137-47f0-92ed-b39b36052c04", 00:25:46.687 "is_configured": true, 00:25:46.687 "data_offset": 2048, 00:25:46.687 "data_size": 63488 00:25:46.687 }, 00:25:46.687 { 00:25:46.687 "name": "BaseBdev2", 00:25:46.687 "uuid": "3232ce29-6673-4fa9-99e3-731ccbf525f1", 00:25:46.687 "is_configured": true, 00:25:46.687 "data_offset": 2048, 00:25:46.687 "data_size": 63488 00:25:46.687 }, 00:25:46.687 { 00:25:46.687 "name": "BaseBdev3", 00:25:46.687 "uuid": "8fa8efe2-388a-407b-b810-2ccbc0444944", 00:25:46.687 "is_configured": true, 00:25:46.687 "data_offset": 2048, 00:25:46.687 "data_size": 63488 00:25:46.687 }, 00:25:46.687 { 00:25:46.687 "name": "BaseBdev4", 00:25:46.687 "uuid": "6640f777-dd9e-4951-b09f-f9318bb00e57", 00:25:46.687 "is_configured": true, 00:25:46.687 "data_offset": 2048, 00:25:46.687 "data_size": 63488 00:25:46.687 } 00:25:46.687 ] 00:25:46.687 }' 00:25:46.687 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:25:46.687 16:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:47.255 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:47.255 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:47.255 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:47.255 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:47.255 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:47.255 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:47.255 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:47.255 16:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:47.255 [2024-07-24 16:42:44.110133] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:47.515 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:47.515 "name": "Existed_Raid", 00:25:47.515 "aliases": [ 00:25:47.515 "806b6f71-d1b9-4d75-9b2a-8f57b7dbf13f" 00:25:47.515 ], 00:25:47.515 "product_name": "Raid Volume", 00:25:47.515 "block_size": 512, 00:25:47.515 "num_blocks": 63488, 00:25:47.515 "uuid": "806b6f71-d1b9-4d75-9b2a-8f57b7dbf13f", 00:25:47.515 "assigned_rate_limits": { 00:25:47.515 "rw_ios_per_sec": 0, 00:25:47.515 "rw_mbytes_per_sec": 0, 00:25:47.515 "r_mbytes_per_sec": 0, 00:25:47.515 "w_mbytes_per_sec": 0 00:25:47.515 }, 00:25:47.515 "claimed": false, 00:25:47.515 "zoned": false, 00:25:47.515 "supported_io_types": { 00:25:47.515 "read": true, 00:25:47.515 "write": true, 
00:25:47.515 "unmap": false, 00:25:47.515 "flush": false, 00:25:47.515 "reset": true, 00:25:47.515 "nvme_admin": false, 00:25:47.515 "nvme_io": false, 00:25:47.515 "nvme_io_md": false, 00:25:47.515 "write_zeroes": true, 00:25:47.515 "zcopy": false, 00:25:47.515 "get_zone_info": false, 00:25:47.515 "zone_management": false, 00:25:47.515 "zone_append": false, 00:25:47.515 "compare": false, 00:25:47.515 "compare_and_write": false, 00:25:47.515 "abort": false, 00:25:47.515 "seek_hole": false, 00:25:47.515 "seek_data": false, 00:25:47.515 "copy": false, 00:25:47.515 "nvme_iov_md": false 00:25:47.515 }, 00:25:47.515 "memory_domains": [ 00:25:47.515 { 00:25:47.515 "dma_device_id": "system", 00:25:47.515 "dma_device_type": 1 00:25:47.515 }, 00:25:47.515 { 00:25:47.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:47.515 "dma_device_type": 2 00:25:47.515 }, 00:25:47.515 { 00:25:47.515 "dma_device_id": "system", 00:25:47.515 "dma_device_type": 1 00:25:47.515 }, 00:25:47.515 { 00:25:47.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:47.515 "dma_device_type": 2 00:25:47.515 }, 00:25:47.515 { 00:25:47.515 "dma_device_id": "system", 00:25:47.515 "dma_device_type": 1 00:25:47.515 }, 00:25:47.515 { 00:25:47.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:47.515 "dma_device_type": 2 00:25:47.515 }, 00:25:47.515 { 00:25:47.515 "dma_device_id": "system", 00:25:47.515 "dma_device_type": 1 00:25:47.515 }, 00:25:47.515 { 00:25:47.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:47.515 "dma_device_type": 2 00:25:47.515 } 00:25:47.515 ], 00:25:47.515 "driver_specific": { 00:25:47.515 "raid": { 00:25:47.515 "uuid": "806b6f71-d1b9-4d75-9b2a-8f57b7dbf13f", 00:25:47.515 "strip_size_kb": 0, 00:25:47.515 "state": "online", 00:25:47.515 "raid_level": "raid1", 00:25:47.515 "superblock": true, 00:25:47.515 "num_base_bdevs": 4, 00:25:47.515 "num_base_bdevs_discovered": 4, 00:25:47.515 "num_base_bdevs_operational": 4, 00:25:47.515 "base_bdevs_list": [ 00:25:47.515 { 00:25:47.515 "name": 
"BaseBdev1", 00:25:47.515 "uuid": "9680bc07-9137-47f0-92ed-b39b36052c04", 00:25:47.515 "is_configured": true, 00:25:47.515 "data_offset": 2048, 00:25:47.515 "data_size": 63488 00:25:47.515 }, 00:25:47.515 { 00:25:47.515 "name": "BaseBdev2", 00:25:47.515 "uuid": "3232ce29-6673-4fa9-99e3-731ccbf525f1", 00:25:47.515 "is_configured": true, 00:25:47.515 "data_offset": 2048, 00:25:47.515 "data_size": 63488 00:25:47.515 }, 00:25:47.515 { 00:25:47.515 "name": "BaseBdev3", 00:25:47.515 "uuid": "8fa8efe2-388a-407b-b810-2ccbc0444944", 00:25:47.515 "is_configured": true, 00:25:47.515 "data_offset": 2048, 00:25:47.515 "data_size": 63488 00:25:47.515 }, 00:25:47.515 { 00:25:47.515 "name": "BaseBdev4", 00:25:47.515 "uuid": "6640f777-dd9e-4951-b09f-f9318bb00e57", 00:25:47.515 "is_configured": true, 00:25:47.515 "data_offset": 2048, 00:25:47.515 "data_size": 63488 00:25:47.515 } 00:25:47.515 ] 00:25:47.515 } 00:25:47.515 } 00:25:47.515 }' 00:25:47.515 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:47.515 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:47.515 BaseBdev2 00:25:47.515 BaseBdev3 00:25:47.515 BaseBdev4' 00:25:47.515 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:47.515 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:47.515 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:47.774 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:47.774 "name": "BaseBdev1", 00:25:47.774 "aliases": [ 00:25:47.774 "9680bc07-9137-47f0-92ed-b39b36052c04" 00:25:47.774 ], 00:25:47.774 "product_name": "Malloc 
disk", 00:25:47.774 "block_size": 512, 00:25:47.774 "num_blocks": 65536, 00:25:47.774 "uuid": "9680bc07-9137-47f0-92ed-b39b36052c04", 00:25:47.774 "assigned_rate_limits": { 00:25:47.774 "rw_ios_per_sec": 0, 00:25:47.774 "rw_mbytes_per_sec": 0, 00:25:47.774 "r_mbytes_per_sec": 0, 00:25:47.774 "w_mbytes_per_sec": 0 00:25:47.774 }, 00:25:47.774 "claimed": true, 00:25:47.774 "claim_type": "exclusive_write", 00:25:47.774 "zoned": false, 00:25:47.774 "supported_io_types": { 00:25:47.774 "read": true, 00:25:47.774 "write": true, 00:25:47.774 "unmap": true, 00:25:47.774 "flush": true, 00:25:47.774 "reset": true, 00:25:47.774 "nvme_admin": false, 00:25:47.774 "nvme_io": false, 00:25:47.774 "nvme_io_md": false, 00:25:47.774 "write_zeroes": true, 00:25:47.774 "zcopy": true, 00:25:47.774 "get_zone_info": false, 00:25:47.775 "zone_management": false, 00:25:47.775 "zone_append": false, 00:25:47.775 "compare": false, 00:25:47.775 "compare_and_write": false, 00:25:47.775 "abort": true, 00:25:47.775 "seek_hole": false, 00:25:47.775 "seek_data": false, 00:25:47.775 "copy": true, 00:25:47.775 "nvme_iov_md": false 00:25:47.775 }, 00:25:47.775 "memory_domains": [ 00:25:47.775 { 00:25:47.775 "dma_device_id": "system", 00:25:47.775 "dma_device_type": 1 00:25:47.775 }, 00:25:47.775 { 00:25:47.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:47.775 "dma_device_type": 2 00:25:47.775 } 00:25:47.775 ], 00:25:47.775 "driver_specific": {} 00:25:47.775 }' 00:25:47.775 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:47.775 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:47.775 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:47.775 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:47.775 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:47.775 16:42:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:47.775 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:47.775 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:48.034 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:48.034 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:48.034 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:48.034 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:48.034 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:48.034 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:48.034 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:48.293 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:48.293 "name": "BaseBdev2", 00:25:48.293 "aliases": [ 00:25:48.293 "3232ce29-6673-4fa9-99e3-731ccbf525f1" 00:25:48.293 ], 00:25:48.293 "product_name": "Malloc disk", 00:25:48.293 "block_size": 512, 00:25:48.293 "num_blocks": 65536, 00:25:48.293 "uuid": "3232ce29-6673-4fa9-99e3-731ccbf525f1", 00:25:48.293 "assigned_rate_limits": { 00:25:48.293 "rw_ios_per_sec": 0, 00:25:48.293 "rw_mbytes_per_sec": 0, 00:25:48.293 "r_mbytes_per_sec": 0, 00:25:48.293 "w_mbytes_per_sec": 0 00:25:48.293 }, 00:25:48.293 "claimed": true, 00:25:48.293 "claim_type": "exclusive_write", 00:25:48.293 "zoned": false, 00:25:48.293 "supported_io_types": { 00:25:48.293 "read": true, 00:25:48.293 "write": true, 00:25:48.293 "unmap": true, 00:25:48.293 
"flush": true, 00:25:48.293 "reset": true, 00:25:48.293 "nvme_admin": false, 00:25:48.293 "nvme_io": false, 00:25:48.293 "nvme_io_md": false, 00:25:48.293 "write_zeroes": true, 00:25:48.293 "zcopy": true, 00:25:48.293 "get_zone_info": false, 00:25:48.293 "zone_management": false, 00:25:48.293 "zone_append": false, 00:25:48.293 "compare": false, 00:25:48.293 "compare_and_write": false, 00:25:48.293 "abort": true, 00:25:48.293 "seek_hole": false, 00:25:48.293 "seek_data": false, 00:25:48.293 "copy": true, 00:25:48.293 "nvme_iov_md": false 00:25:48.293 }, 00:25:48.293 "memory_domains": [ 00:25:48.293 { 00:25:48.293 "dma_device_id": "system", 00:25:48.293 "dma_device_type": 1 00:25:48.293 }, 00:25:48.293 { 00:25:48.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.293 "dma_device_type": 2 00:25:48.293 } 00:25:48.293 ], 00:25:48.293 "driver_specific": {} 00:25:48.293 }' 00:25:48.293 16:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:48.293 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:48.293 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:48.293 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:48.293 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:48.293 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:48.552 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:48.552 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:48.552 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:48.552 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:48.552 16:42:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:48.552 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:48.552 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:48.553 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:48.553 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:48.812 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:48.812 "name": "BaseBdev3", 00:25:48.812 "aliases": [ 00:25:48.812 "8fa8efe2-388a-407b-b810-2ccbc0444944" 00:25:48.812 ], 00:25:48.812 "product_name": "Malloc disk", 00:25:48.812 "block_size": 512, 00:25:48.812 "num_blocks": 65536, 00:25:48.812 "uuid": "8fa8efe2-388a-407b-b810-2ccbc0444944", 00:25:48.812 "assigned_rate_limits": { 00:25:48.812 "rw_ios_per_sec": 0, 00:25:48.812 "rw_mbytes_per_sec": 0, 00:25:48.812 "r_mbytes_per_sec": 0, 00:25:48.812 "w_mbytes_per_sec": 0 00:25:48.812 }, 00:25:48.812 "claimed": true, 00:25:48.812 "claim_type": "exclusive_write", 00:25:48.812 "zoned": false, 00:25:48.812 "supported_io_types": { 00:25:48.812 "read": true, 00:25:48.812 "write": true, 00:25:48.812 "unmap": true, 00:25:48.812 "flush": true, 00:25:48.812 "reset": true, 00:25:48.812 "nvme_admin": false, 00:25:48.812 "nvme_io": false, 00:25:48.812 "nvme_io_md": false, 00:25:48.812 "write_zeroes": true, 00:25:48.812 "zcopy": true, 00:25:48.812 "get_zone_info": false, 00:25:48.812 "zone_management": false, 00:25:48.812 "zone_append": false, 00:25:48.812 "compare": false, 00:25:48.812 "compare_and_write": false, 00:25:48.812 "abort": true, 00:25:48.812 "seek_hole": false, 00:25:48.812 "seek_data": false, 00:25:48.812 "copy": true, 00:25:48.812 "nvme_iov_md": 
false 00:25:48.812 }, 00:25:48.812 "memory_domains": [ 00:25:48.812 { 00:25:48.812 "dma_device_id": "system", 00:25:48.812 "dma_device_type": 1 00:25:48.812 }, 00:25:48.812 { 00:25:48.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.812 "dma_device_type": 2 00:25:48.812 } 00:25:48.812 ], 00:25:48.812 "driver_specific": {} 00:25:48.812 }' 00:25:48.812 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:48.812 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:48.812 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:48.812 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.071 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.071 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:49.071 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.071 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.071 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:49.071 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.071 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.071 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:49.071 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:49.071 16:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:49.071 16:42:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:49.330 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:49.330 "name": "BaseBdev4", 00:25:49.330 "aliases": [ 00:25:49.330 "6640f777-dd9e-4951-b09f-f9318bb00e57" 00:25:49.330 ], 00:25:49.330 "product_name": "Malloc disk", 00:25:49.330 "block_size": 512, 00:25:49.330 "num_blocks": 65536, 00:25:49.330 "uuid": "6640f777-dd9e-4951-b09f-f9318bb00e57", 00:25:49.330 "assigned_rate_limits": { 00:25:49.330 "rw_ios_per_sec": 0, 00:25:49.330 "rw_mbytes_per_sec": 0, 00:25:49.330 "r_mbytes_per_sec": 0, 00:25:49.330 "w_mbytes_per_sec": 0 00:25:49.330 }, 00:25:49.330 "claimed": true, 00:25:49.330 "claim_type": "exclusive_write", 00:25:49.330 "zoned": false, 00:25:49.330 "supported_io_types": { 00:25:49.330 "read": true, 00:25:49.330 "write": true, 00:25:49.330 "unmap": true, 00:25:49.330 "flush": true, 00:25:49.330 "reset": true, 00:25:49.330 "nvme_admin": false, 00:25:49.330 "nvme_io": false, 00:25:49.330 "nvme_io_md": false, 00:25:49.330 "write_zeroes": true, 00:25:49.330 "zcopy": true, 00:25:49.330 "get_zone_info": false, 00:25:49.330 "zone_management": false, 00:25:49.330 "zone_append": false, 00:25:49.330 "compare": false, 00:25:49.330 "compare_and_write": false, 00:25:49.330 "abort": true, 00:25:49.330 "seek_hole": false, 00:25:49.330 "seek_data": false, 00:25:49.330 "copy": true, 00:25:49.330 "nvme_iov_md": false 00:25:49.330 }, 00:25:49.330 "memory_domains": [ 00:25:49.330 { 00:25:49.330 "dma_device_id": "system", 00:25:49.330 "dma_device_type": 1 00:25:49.330 }, 00:25:49.330 { 00:25:49.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.330 "dma_device_type": 2 00:25:49.330 } 00:25:49.330 ], 00:25:49.330 "driver_specific": {} 00:25:49.330 }' 00:25:49.330 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.330 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:25:49.589 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:49.589 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.589 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.589 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:49.589 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.589 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.589 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:49.589 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.589 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.849 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:49.849 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:49.849 [2024-07-24 16:42:46.684759] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.108 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:50.368 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:50.368 "name": "Existed_Raid", 00:25:50.368 "uuid": "806b6f71-d1b9-4d75-9b2a-8f57b7dbf13f", 00:25:50.368 "strip_size_kb": 0, 00:25:50.368 "state": "online", 00:25:50.368 "raid_level": "raid1", 00:25:50.368 "superblock": true, 00:25:50.368 "num_base_bdevs": 4, 00:25:50.368 "num_base_bdevs_discovered": 3, 00:25:50.368 "num_base_bdevs_operational": 3, 00:25:50.368 "base_bdevs_list": [ 00:25:50.368 { 00:25:50.368 "name": null, 
00:25:50.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.368 "is_configured": false, 00:25:50.368 "data_offset": 2048, 00:25:50.368 "data_size": 63488 00:25:50.368 }, 00:25:50.368 { 00:25:50.368 "name": "BaseBdev2", 00:25:50.368 "uuid": "3232ce29-6673-4fa9-99e3-731ccbf525f1", 00:25:50.368 "is_configured": true, 00:25:50.368 "data_offset": 2048, 00:25:50.368 "data_size": 63488 00:25:50.368 }, 00:25:50.368 { 00:25:50.368 "name": "BaseBdev3", 00:25:50.368 "uuid": "8fa8efe2-388a-407b-b810-2ccbc0444944", 00:25:50.368 "is_configured": true, 00:25:50.368 "data_offset": 2048, 00:25:50.368 "data_size": 63488 00:25:50.368 }, 00:25:50.368 { 00:25:50.368 "name": "BaseBdev4", 00:25:50.368 "uuid": "6640f777-dd9e-4951-b09f-f9318bb00e57", 00:25:50.368 "is_configured": true, 00:25:50.368 "data_offset": 2048, 00:25:50.368 "data_size": 63488 00:25:50.368 } 00:25:50.368 ] 00:25:50.368 }' 00:25:50.368 16:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:50.368 16:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:50.937 16:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:50.937 16:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:50.937 16:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.937 16:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:50.937 16:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:50.937 16:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:50.937 16:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:51.562 [2024-07-24 16:42:48.252485] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:51.562 16:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:51.562 16:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:51.562 16:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.562 16:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:51.821 16:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:51.821 16:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:51.821 16:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:52.080 [2024-07-24 16:42:48.845009] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:52.340 16:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:52.340 16:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:52.340 16:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.340 16:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:52.599 16:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:52.599 16:42:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:52.599 16:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:52.859 [2024-07-24 16:42:49.713180] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:52.859 [2024-07-24 16:42:49.713293] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:53.118 [2024-07-24 16:42:49.844053] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:53.118 [2024-07-24 16:42:49.844108] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:53.118 [2024-07-24 16:42:49.844127] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:25:53.118 16:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:53.118 16:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:53.118 16:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.118 16:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:53.377 16:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:53.377 16:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:53.377 16:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:53.377 16:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:53.377 16:42:50 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:53.377 16:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:53.635 BaseBdev2 00:25:53.635 16:42:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:53.635 16:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:53.635 16:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:53.635 16:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:53.635 16:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:53.635 16:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:53.635 16:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:54.204 16:42:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:54.463 [ 00:25:54.463 { 00:25:54.463 "name": "BaseBdev2", 00:25:54.463 "aliases": [ 00:25:54.463 "ca75682e-b602-4a80-94b4-1069f54c50aa" 00:25:54.463 ], 00:25:54.463 "product_name": "Malloc disk", 00:25:54.463 "block_size": 512, 00:25:54.463 "num_blocks": 65536, 00:25:54.463 "uuid": "ca75682e-b602-4a80-94b4-1069f54c50aa", 00:25:54.463 "assigned_rate_limits": { 00:25:54.463 "rw_ios_per_sec": 0, 00:25:54.463 "rw_mbytes_per_sec": 0, 00:25:54.463 "r_mbytes_per_sec": 0, 00:25:54.463 "w_mbytes_per_sec": 0 00:25:54.463 }, 00:25:54.463 "claimed": false, 00:25:54.463 "zoned": 
false, 00:25:54.463 "supported_io_types": { 00:25:54.463 "read": true, 00:25:54.463 "write": true, 00:25:54.463 "unmap": true, 00:25:54.463 "flush": true, 00:25:54.463 "reset": true, 00:25:54.463 "nvme_admin": false, 00:25:54.463 "nvme_io": false, 00:25:54.463 "nvme_io_md": false, 00:25:54.463 "write_zeroes": true, 00:25:54.463 "zcopy": true, 00:25:54.463 "get_zone_info": false, 00:25:54.463 "zone_management": false, 00:25:54.463 "zone_append": false, 00:25:54.463 "compare": false, 00:25:54.463 "compare_and_write": false, 00:25:54.463 "abort": true, 00:25:54.463 "seek_hole": false, 00:25:54.463 "seek_data": false, 00:25:54.463 "copy": true, 00:25:54.463 "nvme_iov_md": false 00:25:54.463 }, 00:25:54.463 "memory_domains": [ 00:25:54.463 { 00:25:54.463 "dma_device_id": "system", 00:25:54.463 "dma_device_type": 1 00:25:54.463 }, 00:25:54.463 { 00:25:54.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:54.463 "dma_device_type": 2 00:25:54.463 } 00:25:54.463 ], 00:25:54.463 "driver_specific": {} 00:25:54.463 } 00:25:54.463 ] 00:25:54.463 16:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:54.463 16:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:54.463 16:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:54.463 16:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:55.031 BaseBdev3 00:25:55.031 16:42:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:55.031 16:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:55.031 16:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:55.031 16:42:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:55.031 16:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:55.031 16:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:55.031 16:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:55.031 16:42:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:55.600 [ 00:25:55.600 { 00:25:55.600 "name": "BaseBdev3", 00:25:55.600 "aliases": [ 00:25:55.600 "8bfa2a84-5e28-489a-b40f-17a9c5e1a834" 00:25:55.600 ], 00:25:55.600 "product_name": "Malloc disk", 00:25:55.600 "block_size": 512, 00:25:55.600 "num_blocks": 65536, 00:25:55.600 "uuid": "8bfa2a84-5e28-489a-b40f-17a9c5e1a834", 00:25:55.600 "assigned_rate_limits": { 00:25:55.600 "rw_ios_per_sec": 0, 00:25:55.600 "rw_mbytes_per_sec": 0, 00:25:55.600 "r_mbytes_per_sec": 0, 00:25:55.600 "w_mbytes_per_sec": 0 00:25:55.600 }, 00:25:55.600 "claimed": false, 00:25:55.600 "zoned": false, 00:25:55.600 "supported_io_types": { 00:25:55.600 "read": true, 00:25:55.600 "write": true, 00:25:55.600 "unmap": true, 00:25:55.600 "flush": true, 00:25:55.600 "reset": true, 00:25:55.600 "nvme_admin": false, 00:25:55.600 "nvme_io": false, 00:25:55.600 "nvme_io_md": false, 00:25:55.600 "write_zeroes": true, 00:25:55.600 "zcopy": true, 00:25:55.600 "get_zone_info": false, 00:25:55.600 "zone_management": false, 00:25:55.600 "zone_append": false, 00:25:55.600 "compare": false, 00:25:55.600 "compare_and_write": false, 00:25:55.600 "abort": true, 00:25:55.600 "seek_hole": false, 00:25:55.600 "seek_data": false, 00:25:55.600 "copy": true, 00:25:55.600 "nvme_iov_md": 
false 00:25:55.600 }, 00:25:55.600 "memory_domains": [ 00:25:55.600 { 00:25:55.600 "dma_device_id": "system", 00:25:55.601 "dma_device_type": 1 00:25:55.601 }, 00:25:55.601 { 00:25:55.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:55.601 "dma_device_type": 2 00:25:55.601 } 00:25:55.601 ], 00:25:55.601 "driver_specific": {} 00:25:55.601 } 00:25:55.601 ] 00:25:55.601 16:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:55.601 16:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:55.601 16:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:55.601 16:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:55.860 BaseBdev4 00:25:55.860 16:42:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:55.860 16:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:25:55.860 16:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:55.860 16:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:55.860 16:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:55.860 16:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:55.860 16:42:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:56.428 16:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:56.686 [ 00:25:56.686 { 00:25:56.686 "name": "BaseBdev4", 00:25:56.686 "aliases": [ 00:25:56.686 "8773adec-8640-48a1-a9e1-d765fd8e5364" 00:25:56.686 ], 00:25:56.686 "product_name": "Malloc disk", 00:25:56.686 "block_size": 512, 00:25:56.686 "num_blocks": 65536, 00:25:56.686 "uuid": "8773adec-8640-48a1-a9e1-d765fd8e5364", 00:25:56.686 "assigned_rate_limits": { 00:25:56.686 "rw_ios_per_sec": 0, 00:25:56.686 "rw_mbytes_per_sec": 0, 00:25:56.686 "r_mbytes_per_sec": 0, 00:25:56.686 "w_mbytes_per_sec": 0 00:25:56.686 }, 00:25:56.686 "claimed": false, 00:25:56.686 "zoned": false, 00:25:56.686 "supported_io_types": { 00:25:56.686 "read": true, 00:25:56.686 "write": true, 00:25:56.686 "unmap": true, 00:25:56.686 "flush": true, 00:25:56.686 "reset": true, 00:25:56.686 "nvme_admin": false, 00:25:56.686 "nvme_io": false, 00:25:56.686 "nvme_io_md": false, 00:25:56.686 "write_zeroes": true, 00:25:56.686 "zcopy": true, 00:25:56.686 "get_zone_info": false, 00:25:56.686 "zone_management": false, 00:25:56.686 "zone_append": false, 00:25:56.686 "compare": false, 00:25:56.686 "compare_and_write": false, 00:25:56.686 "abort": true, 00:25:56.686 "seek_hole": false, 00:25:56.686 "seek_data": false, 00:25:56.686 "copy": true, 00:25:56.686 "nvme_iov_md": false 00:25:56.686 }, 00:25:56.686 "memory_domains": [ 00:25:56.686 { 00:25:56.686 "dma_device_id": "system", 00:25:56.686 "dma_device_type": 1 00:25:56.686 }, 00:25:56.686 { 00:25:56.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:56.686 "dma_device_type": 2 00:25:56.686 } 00:25:56.686 ], 00:25:56.686 "driver_specific": {} 00:25:56.686 } 00:25:56.686 ] 00:25:56.686 16:42:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:56.686 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:56.686 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
00:25:56.686 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:57.254 [2024-07-24 16:42:53.892890] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:57.254 [2024-07-24 16:42:53.892943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:57.254 [2024-07-24 16:42:53.892972] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:57.254 [2024-07-24 16:42:53.895265] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:57.254 [2024-07-24 16:42:53.895322] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:57.254 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:57.254 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:57.254 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:57.254 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:57.254 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:57.254 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:57.254 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:57.254 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:57.254 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:25:57.254 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:57.254 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.254 16:42:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.513 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.513 "name": "Existed_Raid", 00:25:57.513 "uuid": "5650b0bd-83c7-4e34-bccd-fb2933e0440e", 00:25:57.513 "strip_size_kb": 0, 00:25:57.513 "state": "configuring", 00:25:57.513 "raid_level": "raid1", 00:25:57.513 "superblock": true, 00:25:57.513 "num_base_bdevs": 4, 00:25:57.513 "num_base_bdevs_discovered": 3, 00:25:57.513 "num_base_bdevs_operational": 4, 00:25:57.513 "base_bdevs_list": [ 00:25:57.513 { 00:25:57.513 "name": "BaseBdev1", 00:25:57.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.513 "is_configured": false, 00:25:57.513 "data_offset": 0, 00:25:57.513 "data_size": 0 00:25:57.513 }, 00:25:57.513 { 00:25:57.513 "name": "BaseBdev2", 00:25:57.513 "uuid": "ca75682e-b602-4a80-94b4-1069f54c50aa", 00:25:57.513 "is_configured": true, 00:25:57.513 "data_offset": 2048, 00:25:57.513 "data_size": 63488 00:25:57.513 }, 00:25:57.513 { 00:25:57.513 "name": "BaseBdev3", 00:25:57.513 "uuid": "8bfa2a84-5e28-489a-b40f-17a9c5e1a834", 00:25:57.513 "is_configured": true, 00:25:57.513 "data_offset": 2048, 00:25:57.513 "data_size": 63488 00:25:57.513 }, 00:25:57.513 { 00:25:57.513 "name": "BaseBdev4", 00:25:57.513 "uuid": "8773adec-8640-48a1-a9e1-d765fd8e5364", 00:25:57.513 "is_configured": true, 00:25:57.513 "data_offset": 2048, 00:25:57.513 "data_size": 63488 00:25:57.513 } 00:25:57.513 ] 00:25:57.513 }' 00:25:57.513 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.513 
16:42:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.082 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:58.342 [2024-07-24 16:42:54.947697] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:58.342 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:58.342 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:58.342 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:58.342 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:58.342 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:58.342 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:58.342 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:58.342 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:58.342 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:58.342 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:58.342 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.342 16:42:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:58.342 16:42:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:58.342 "name": "Existed_Raid", 00:25:58.342 "uuid": "5650b0bd-83c7-4e34-bccd-fb2933e0440e", 00:25:58.342 "strip_size_kb": 0, 00:25:58.342 "state": "configuring", 00:25:58.342 "raid_level": "raid1", 00:25:58.342 "superblock": true, 00:25:58.342 "num_base_bdevs": 4, 00:25:58.342 "num_base_bdevs_discovered": 2, 00:25:58.342 "num_base_bdevs_operational": 4, 00:25:58.342 "base_bdevs_list": [ 00:25:58.342 { 00:25:58.342 "name": "BaseBdev1", 00:25:58.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.342 "is_configured": false, 00:25:58.342 "data_offset": 0, 00:25:58.342 "data_size": 0 00:25:58.342 }, 00:25:58.342 { 00:25:58.342 "name": null, 00:25:58.342 "uuid": "ca75682e-b602-4a80-94b4-1069f54c50aa", 00:25:58.342 "is_configured": false, 00:25:58.342 "data_offset": 2048, 00:25:58.342 "data_size": 63488 00:25:58.342 }, 00:25:58.342 { 00:25:58.342 "name": "BaseBdev3", 00:25:58.342 "uuid": "8bfa2a84-5e28-489a-b40f-17a9c5e1a834", 00:25:58.342 "is_configured": true, 00:25:58.342 "data_offset": 2048, 00:25:58.342 "data_size": 63488 00:25:58.342 }, 00:25:58.342 { 00:25:58.342 "name": "BaseBdev4", 00:25:58.342 "uuid": "8773adec-8640-48a1-a9e1-d765fd8e5364", 00:25:58.342 "is_configured": true, 00:25:58.342 "data_offset": 2048, 00:25:58.342 "data_size": 63488 00:25:58.342 } 00:25:58.342 ] 00:25:58.342 }' 00:25:58.342 16:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:58.342 16:42:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.911 16:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.911 16:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:59.170 16:42:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:59.170 16:42:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:59.429 [2024-07-24 16:42:56.254180] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:59.429 BaseBdev1 00:25:59.429 16:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:59.429 16:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:59.429 16:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:59.429 16:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:59.429 16:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:59.429 16:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:59.429 16:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:59.688 16:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:59.948 [ 00:25:59.948 { 00:25:59.948 "name": "BaseBdev1", 00:25:59.948 "aliases": [ 00:25:59.948 "25a0c859-bb57-4ccd-ad15-8a90e07b547d" 00:25:59.948 ], 00:25:59.948 "product_name": "Malloc disk", 00:25:59.948 "block_size": 512, 00:25:59.948 "num_blocks": 65536, 00:25:59.948 "uuid": "25a0c859-bb57-4ccd-ad15-8a90e07b547d", 00:25:59.948 "assigned_rate_limits": { 00:25:59.948 "rw_ios_per_sec": 0, 00:25:59.948 
"rw_mbytes_per_sec": 0, 00:25:59.948 "r_mbytes_per_sec": 0, 00:25:59.948 "w_mbytes_per_sec": 0 00:25:59.948 }, 00:25:59.948 "claimed": true, 00:25:59.948 "claim_type": "exclusive_write", 00:25:59.948 "zoned": false, 00:25:59.948 "supported_io_types": { 00:25:59.948 "read": true, 00:25:59.948 "write": true, 00:25:59.948 "unmap": true, 00:25:59.948 "flush": true, 00:25:59.948 "reset": true, 00:25:59.948 "nvme_admin": false, 00:25:59.948 "nvme_io": false, 00:25:59.948 "nvme_io_md": false, 00:25:59.948 "write_zeroes": true, 00:25:59.948 "zcopy": true, 00:25:59.948 "get_zone_info": false, 00:25:59.948 "zone_management": false, 00:25:59.948 "zone_append": false, 00:25:59.948 "compare": false, 00:25:59.948 "compare_and_write": false, 00:25:59.948 "abort": true, 00:25:59.948 "seek_hole": false, 00:25:59.948 "seek_data": false, 00:25:59.948 "copy": true, 00:25:59.948 "nvme_iov_md": false 00:25:59.948 }, 00:25:59.948 "memory_domains": [ 00:25:59.948 { 00:25:59.948 "dma_device_id": "system", 00:25:59.948 "dma_device_type": 1 00:25:59.948 }, 00:25:59.948 { 00:25:59.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.948 "dma_device_type": 2 00:25:59.948 } 00:25:59.948 ], 00:25:59.948 "driver_specific": {} 00:25:59.948 } 00:25:59.948 ] 00:25:59.948 16:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:59.948 16:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:59.948 16:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:59.948 16:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:59.948 16:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:59.948 16:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:59.948 16:42:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:59.948 16:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:59.948 16:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:59.948 16:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:59.948 16:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:59.948 16:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.948 16:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.207 16:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:00.207 "name": "Existed_Raid", 00:26:00.207 "uuid": "5650b0bd-83c7-4e34-bccd-fb2933e0440e", 00:26:00.207 "strip_size_kb": 0, 00:26:00.207 "state": "configuring", 00:26:00.207 "raid_level": "raid1", 00:26:00.207 "superblock": true, 00:26:00.207 "num_base_bdevs": 4, 00:26:00.207 "num_base_bdevs_discovered": 3, 00:26:00.207 "num_base_bdevs_operational": 4, 00:26:00.207 "base_bdevs_list": [ 00:26:00.207 { 00:26:00.207 "name": "BaseBdev1", 00:26:00.207 "uuid": "25a0c859-bb57-4ccd-ad15-8a90e07b547d", 00:26:00.207 "is_configured": true, 00:26:00.207 "data_offset": 2048, 00:26:00.207 "data_size": 63488 00:26:00.207 }, 00:26:00.207 { 00:26:00.207 "name": null, 00:26:00.207 "uuid": "ca75682e-b602-4a80-94b4-1069f54c50aa", 00:26:00.207 "is_configured": false, 00:26:00.207 "data_offset": 2048, 00:26:00.207 "data_size": 63488 00:26:00.207 }, 00:26:00.207 { 00:26:00.207 "name": "BaseBdev3", 00:26:00.207 "uuid": "8bfa2a84-5e28-489a-b40f-17a9c5e1a834", 00:26:00.207 "is_configured": true, 00:26:00.207 
"data_offset": 2048, 00:26:00.207 "data_size": 63488 00:26:00.207 }, 00:26:00.207 { 00:26:00.207 "name": "BaseBdev4", 00:26:00.207 "uuid": "8773adec-8640-48a1-a9e1-d765fd8e5364", 00:26:00.207 "is_configured": true, 00:26:00.207 "data_offset": 2048, 00:26:00.207 "data_size": 63488 00:26:00.207 } 00:26:00.207 ] 00:26:00.207 }' 00:26:00.207 16:42:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:00.207 16:42:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:00.775 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.775 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:01.033 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:26:01.033 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:01.293 [2024-07-24 16:42:57.954858] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:01.293 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:01.293 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:01.293 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:01.293 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:01.293 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:01.293 16:42:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:01.293 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:01.293 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:01.293 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:01.293 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:01.293 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.293 16:42:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:01.552 16:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:01.552 "name": "Existed_Raid", 00:26:01.552 "uuid": "5650b0bd-83c7-4e34-bccd-fb2933e0440e", 00:26:01.552 "strip_size_kb": 0, 00:26:01.552 "state": "configuring", 00:26:01.552 "raid_level": "raid1", 00:26:01.552 "superblock": true, 00:26:01.552 "num_base_bdevs": 4, 00:26:01.552 "num_base_bdevs_discovered": 2, 00:26:01.552 "num_base_bdevs_operational": 4, 00:26:01.552 "base_bdevs_list": [ 00:26:01.552 { 00:26:01.552 "name": "BaseBdev1", 00:26:01.552 "uuid": "25a0c859-bb57-4ccd-ad15-8a90e07b547d", 00:26:01.552 "is_configured": true, 00:26:01.552 "data_offset": 2048, 00:26:01.552 "data_size": 63488 00:26:01.552 }, 00:26:01.552 { 00:26:01.552 "name": null, 00:26:01.552 "uuid": "ca75682e-b602-4a80-94b4-1069f54c50aa", 00:26:01.552 "is_configured": false, 00:26:01.552 "data_offset": 2048, 00:26:01.552 "data_size": 63488 00:26:01.552 }, 00:26:01.552 { 00:26:01.552 "name": null, 00:26:01.552 "uuid": "8bfa2a84-5e28-489a-b40f-17a9c5e1a834", 00:26:01.552 "is_configured": false, 00:26:01.552 "data_offset": 2048, 00:26:01.552 "data_size": 
63488 00:26:01.552 }, 00:26:01.552 { 00:26:01.552 "name": "BaseBdev4", 00:26:01.552 "uuid": "8773adec-8640-48a1-a9e1-d765fd8e5364", 00:26:01.552 "is_configured": true, 00:26:01.552 "data_offset": 2048, 00:26:01.552 "data_size": 63488 00:26:01.552 } 00:26:01.552 ] 00:26:01.552 }' 00:26:01.552 16:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:01.552 16:42:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.120 16:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:02.120 16:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.120 16:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:02.120 16:42:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:02.380 [2024-07-24 16:42:59.154103] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:02.380 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:02.380 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:02.380 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:02.380 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:02.380 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:02.380 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- 
# local num_base_bdevs_operational=4 00:26:02.380 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:02.380 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:02.380 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:02.380 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:02.380 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.380 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:02.639 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:02.639 "name": "Existed_Raid", 00:26:02.639 "uuid": "5650b0bd-83c7-4e34-bccd-fb2933e0440e", 00:26:02.639 "strip_size_kb": 0, 00:26:02.639 "state": "configuring", 00:26:02.639 "raid_level": "raid1", 00:26:02.639 "superblock": true, 00:26:02.639 "num_base_bdevs": 4, 00:26:02.639 "num_base_bdevs_discovered": 3, 00:26:02.639 "num_base_bdevs_operational": 4, 00:26:02.639 "base_bdevs_list": [ 00:26:02.639 { 00:26:02.639 "name": "BaseBdev1", 00:26:02.639 "uuid": "25a0c859-bb57-4ccd-ad15-8a90e07b547d", 00:26:02.639 "is_configured": true, 00:26:02.639 "data_offset": 2048, 00:26:02.639 "data_size": 63488 00:26:02.639 }, 00:26:02.639 { 00:26:02.639 "name": null, 00:26:02.639 "uuid": "ca75682e-b602-4a80-94b4-1069f54c50aa", 00:26:02.639 "is_configured": false, 00:26:02.639 "data_offset": 2048, 00:26:02.639 "data_size": 63488 00:26:02.639 }, 00:26:02.639 { 00:26:02.639 "name": "BaseBdev3", 00:26:02.639 "uuid": "8bfa2a84-5e28-489a-b40f-17a9c5e1a834", 00:26:02.639 "is_configured": true, 00:26:02.639 "data_offset": 2048, 00:26:02.639 "data_size": 63488 00:26:02.639 
}, 00:26:02.639 { 00:26:02.639 "name": "BaseBdev4", 00:26:02.639 "uuid": "8773adec-8640-48a1-a9e1-d765fd8e5364", 00:26:02.639 "is_configured": true, 00:26:02.639 "data_offset": 2048, 00:26:02.639 "data_size": 63488 00:26:02.639 } 00:26:02.639 ] 00:26:02.639 }' 00:26:02.639 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:02.639 16:42:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.207 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.207 16:42:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:03.466 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:03.466 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:03.726 [2024-07-24 16:43:00.369589] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:03.726 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:03.726 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:03.726 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:03.726 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:03.726 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:03.726 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:03.726 16:43:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:03.726 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:03.726 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:03.726 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:03.726 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.726 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:03.985 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:03.985 "name": "Existed_Raid", 00:26:03.985 "uuid": "5650b0bd-83c7-4e34-bccd-fb2933e0440e", 00:26:03.985 "strip_size_kb": 0, 00:26:03.985 "state": "configuring", 00:26:03.985 "raid_level": "raid1", 00:26:03.985 "superblock": true, 00:26:03.985 "num_base_bdevs": 4, 00:26:03.985 "num_base_bdevs_discovered": 2, 00:26:03.985 "num_base_bdevs_operational": 4, 00:26:03.985 "base_bdevs_list": [ 00:26:03.985 { 00:26:03.985 "name": null, 00:26:03.985 "uuid": "25a0c859-bb57-4ccd-ad15-8a90e07b547d", 00:26:03.985 "is_configured": false, 00:26:03.985 "data_offset": 2048, 00:26:03.985 "data_size": 63488 00:26:03.985 }, 00:26:03.985 { 00:26:03.985 "name": null, 00:26:03.985 "uuid": "ca75682e-b602-4a80-94b4-1069f54c50aa", 00:26:03.985 "is_configured": false, 00:26:03.985 "data_offset": 2048, 00:26:03.985 "data_size": 63488 00:26:03.985 }, 00:26:03.985 { 00:26:03.985 "name": "BaseBdev3", 00:26:03.985 "uuid": "8bfa2a84-5e28-489a-b40f-17a9c5e1a834", 00:26:03.985 "is_configured": true, 00:26:03.985 "data_offset": 2048, 00:26:03.985 "data_size": 63488 00:26:03.985 }, 00:26:03.985 { 00:26:03.985 "name": "BaseBdev4", 00:26:03.985 
"uuid": "8773adec-8640-48a1-a9e1-d765fd8e5364", 00:26:03.985 "is_configured": true, 00:26:03.985 "data_offset": 2048, 00:26:03.985 "data_size": 63488 00:26:03.985 } 00:26:03.985 ] 00:26:03.985 }' 00:26:03.985 16:43:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:03.985 16:43:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.579 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.579 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:04.861 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:04.861 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:04.861 [2024-07-24 16:43:01.722484] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:05.121 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:05.121 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:05.121 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:05.121 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:05.121 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:05.121 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:05.121 16:43:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:05.121 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:05.121 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:05.121 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:05.121 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.121 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:05.121 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:05.121 "name": "Existed_Raid", 00:26:05.121 "uuid": "5650b0bd-83c7-4e34-bccd-fb2933e0440e", 00:26:05.121 "strip_size_kb": 0, 00:26:05.121 "state": "configuring", 00:26:05.121 "raid_level": "raid1", 00:26:05.121 "superblock": true, 00:26:05.121 "num_base_bdevs": 4, 00:26:05.121 "num_base_bdevs_discovered": 3, 00:26:05.121 "num_base_bdevs_operational": 4, 00:26:05.121 "base_bdevs_list": [ 00:26:05.121 { 00:26:05.121 "name": null, 00:26:05.121 "uuid": "25a0c859-bb57-4ccd-ad15-8a90e07b547d", 00:26:05.121 "is_configured": false, 00:26:05.121 "data_offset": 2048, 00:26:05.121 "data_size": 63488 00:26:05.121 }, 00:26:05.121 { 00:26:05.121 "name": "BaseBdev2", 00:26:05.121 "uuid": "ca75682e-b602-4a80-94b4-1069f54c50aa", 00:26:05.121 "is_configured": true, 00:26:05.121 "data_offset": 2048, 00:26:05.121 "data_size": 63488 00:26:05.121 }, 00:26:05.121 { 00:26:05.121 "name": "BaseBdev3", 00:26:05.121 "uuid": "8bfa2a84-5e28-489a-b40f-17a9c5e1a834", 00:26:05.121 "is_configured": true, 00:26:05.121 "data_offset": 2048, 00:26:05.121 "data_size": 63488 00:26:05.121 }, 00:26:05.121 { 00:26:05.121 "name": "BaseBdev4", 
00:26:05.121 "uuid": "8773adec-8640-48a1-a9e1-d765fd8e5364", 00:26:05.121 "is_configured": true, 00:26:05.121 "data_offset": 2048, 00:26:05.121 "data_size": 63488 00:26:05.121 } 00:26:05.121 ] 00:26:05.121 }' 00:26:05.121 16:43:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:05.121 16:43:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.058 16:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.058 16:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:06.058 16:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:06.058 16:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:06.058 16:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.318 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 25a0c859-bb57-4ccd-ad15-8a90e07b547d 00:26:06.578 [2024-07-24 16:43:03.282525] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:06.578 [2024-07-24 16:43:03.282797] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:26:06.578 [2024-07-24 16:43:03.282823] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:06.578 [2024-07-24 16:43:03.283134] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:26:06.578 [2024-07-24 
16:43:03.283378] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:26:06.578 [2024-07-24 16:43:03.283393] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000042080 00:26:06.578 NewBaseBdev 00:26:06.578 [2024-07-24 16:43:03.283568] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:06.578 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:06.578 16:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:26:06.578 16:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:06.578 16:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:06.578 16:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:06.578 16:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:06.578 16:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:06.836 16:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:07.095 [ 00:26:07.095 { 00:26:07.095 "name": "NewBaseBdev", 00:26:07.095 "aliases": [ 00:26:07.095 "25a0c859-bb57-4ccd-ad15-8a90e07b547d" 00:26:07.095 ], 00:26:07.095 "product_name": "Malloc disk", 00:26:07.095 "block_size": 512, 00:26:07.095 "num_blocks": 65536, 00:26:07.095 "uuid": "25a0c859-bb57-4ccd-ad15-8a90e07b547d", 00:26:07.095 "assigned_rate_limits": { 00:26:07.095 "rw_ios_per_sec": 0, 00:26:07.095 "rw_mbytes_per_sec": 0, 00:26:07.095 
"r_mbytes_per_sec": 0, 00:26:07.095 "w_mbytes_per_sec": 0 00:26:07.095 }, 00:26:07.095 "claimed": true, 00:26:07.095 "claim_type": "exclusive_write", 00:26:07.095 "zoned": false, 00:26:07.095 "supported_io_types": { 00:26:07.095 "read": true, 00:26:07.095 "write": true, 00:26:07.095 "unmap": true, 00:26:07.095 "flush": true, 00:26:07.095 "reset": true, 00:26:07.095 "nvme_admin": false, 00:26:07.095 "nvme_io": false, 00:26:07.095 "nvme_io_md": false, 00:26:07.095 "write_zeroes": true, 00:26:07.095 "zcopy": true, 00:26:07.095 "get_zone_info": false, 00:26:07.095 "zone_management": false, 00:26:07.095 "zone_append": false, 00:26:07.095 "compare": false, 00:26:07.095 "compare_and_write": false, 00:26:07.095 "abort": true, 00:26:07.095 "seek_hole": false, 00:26:07.095 "seek_data": false, 00:26:07.095 "copy": true, 00:26:07.095 "nvme_iov_md": false 00:26:07.095 }, 00:26:07.095 "memory_domains": [ 00:26:07.095 { 00:26:07.095 "dma_device_id": "system", 00:26:07.095 "dma_device_type": 1 00:26:07.095 }, 00:26:07.095 { 00:26:07.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.095 "dma_device_type": 2 00:26:07.095 } 00:26:07.095 ], 00:26:07.095 "driver_specific": {} 00:26:07.095 } 00:26:07.095 ] 00:26:07.095 16:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:07.095 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:07.095 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:07.095 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:07.095 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:07.095 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:07.095 16:43:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:07.095 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:07.095 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:07.095 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:07.095 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:07.095 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.095 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.354 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:07.355 "name": "Existed_Raid", 00:26:07.355 "uuid": "5650b0bd-83c7-4e34-bccd-fb2933e0440e", 00:26:07.355 "strip_size_kb": 0, 00:26:07.355 "state": "online", 00:26:07.355 "raid_level": "raid1", 00:26:07.355 "superblock": true, 00:26:07.355 "num_base_bdevs": 4, 00:26:07.355 "num_base_bdevs_discovered": 4, 00:26:07.355 "num_base_bdevs_operational": 4, 00:26:07.355 "base_bdevs_list": [ 00:26:07.355 { 00:26:07.355 "name": "NewBaseBdev", 00:26:07.355 "uuid": "25a0c859-bb57-4ccd-ad15-8a90e07b547d", 00:26:07.355 "is_configured": true, 00:26:07.355 "data_offset": 2048, 00:26:07.355 "data_size": 63488 00:26:07.355 }, 00:26:07.355 { 00:26:07.355 "name": "BaseBdev2", 00:26:07.355 "uuid": "ca75682e-b602-4a80-94b4-1069f54c50aa", 00:26:07.355 "is_configured": true, 00:26:07.355 "data_offset": 2048, 00:26:07.355 "data_size": 63488 00:26:07.355 }, 00:26:07.355 { 00:26:07.355 "name": "BaseBdev3", 00:26:07.355 "uuid": "8bfa2a84-5e28-489a-b40f-17a9c5e1a834", 00:26:07.355 "is_configured": true, 00:26:07.355 "data_offset": 2048, 00:26:07.355 
"data_size": 63488 00:26:07.355 }, 00:26:07.355 { 00:26:07.355 "name": "BaseBdev4", 00:26:07.355 "uuid": "8773adec-8640-48a1-a9e1-d765fd8e5364", 00:26:07.355 "is_configured": true, 00:26:07.355 "data_offset": 2048, 00:26:07.355 "data_size": 63488 00:26:07.355 } 00:26:07.355 ] 00:26:07.355 }' 00:26:07.355 16:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:07.355 16:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.923 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:07.923 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:07.923 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:07.923 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:07.923 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:07.923 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:07.923 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:07.923 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:07.923 [2024-07-24 16:43:04.674845] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:07.923 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:07.923 "name": "Existed_Raid", 00:26:07.923 "aliases": [ 00:26:07.923 "5650b0bd-83c7-4e34-bccd-fb2933e0440e" 00:26:07.923 ], 00:26:07.923 "product_name": "Raid Volume", 00:26:07.923 "block_size": 512, 00:26:07.923 "num_blocks": 63488, 00:26:07.923 "uuid": 
"5650b0bd-83c7-4e34-bccd-fb2933e0440e", 00:26:07.923 "assigned_rate_limits": { 00:26:07.923 "rw_ios_per_sec": 0, 00:26:07.923 "rw_mbytes_per_sec": 0, 00:26:07.923 "r_mbytes_per_sec": 0, 00:26:07.923 "w_mbytes_per_sec": 0 00:26:07.923 }, 00:26:07.923 "claimed": false, 00:26:07.923 "zoned": false, 00:26:07.923 "supported_io_types": { 00:26:07.923 "read": true, 00:26:07.923 "write": true, 00:26:07.923 "unmap": false, 00:26:07.923 "flush": false, 00:26:07.923 "reset": true, 00:26:07.923 "nvme_admin": false, 00:26:07.923 "nvme_io": false, 00:26:07.923 "nvme_io_md": false, 00:26:07.923 "write_zeroes": true, 00:26:07.923 "zcopy": false, 00:26:07.923 "get_zone_info": false, 00:26:07.923 "zone_management": false, 00:26:07.923 "zone_append": false, 00:26:07.923 "compare": false, 00:26:07.923 "compare_and_write": false, 00:26:07.923 "abort": false, 00:26:07.923 "seek_hole": false, 00:26:07.923 "seek_data": false, 00:26:07.923 "copy": false, 00:26:07.923 "nvme_iov_md": false 00:26:07.923 }, 00:26:07.923 "memory_domains": [ 00:26:07.923 { 00:26:07.923 "dma_device_id": "system", 00:26:07.923 "dma_device_type": 1 00:26:07.923 }, 00:26:07.923 { 00:26:07.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.923 "dma_device_type": 2 00:26:07.923 }, 00:26:07.923 { 00:26:07.923 "dma_device_id": "system", 00:26:07.923 "dma_device_type": 1 00:26:07.923 }, 00:26:07.923 { 00:26:07.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.923 "dma_device_type": 2 00:26:07.923 }, 00:26:07.923 { 00:26:07.923 "dma_device_id": "system", 00:26:07.923 "dma_device_type": 1 00:26:07.923 }, 00:26:07.923 { 00:26:07.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.923 "dma_device_type": 2 00:26:07.923 }, 00:26:07.923 { 00:26:07.923 "dma_device_id": "system", 00:26:07.923 "dma_device_type": 1 00:26:07.923 }, 00:26:07.923 { 00:26:07.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.923 "dma_device_type": 2 00:26:07.923 } 00:26:07.923 ], 00:26:07.923 "driver_specific": { 00:26:07.923 "raid": { 
00:26:07.923 "uuid": "5650b0bd-83c7-4e34-bccd-fb2933e0440e", 00:26:07.923 "strip_size_kb": 0, 00:26:07.923 "state": "online", 00:26:07.923 "raid_level": "raid1", 00:26:07.923 "superblock": true, 00:26:07.923 "num_base_bdevs": 4, 00:26:07.923 "num_base_bdevs_discovered": 4, 00:26:07.923 "num_base_bdevs_operational": 4, 00:26:07.923 "base_bdevs_list": [ 00:26:07.923 { 00:26:07.923 "name": "NewBaseBdev", 00:26:07.923 "uuid": "25a0c859-bb57-4ccd-ad15-8a90e07b547d", 00:26:07.923 "is_configured": true, 00:26:07.923 "data_offset": 2048, 00:26:07.923 "data_size": 63488 00:26:07.923 }, 00:26:07.923 { 00:26:07.923 "name": "BaseBdev2", 00:26:07.923 "uuid": "ca75682e-b602-4a80-94b4-1069f54c50aa", 00:26:07.923 "is_configured": true, 00:26:07.923 "data_offset": 2048, 00:26:07.923 "data_size": 63488 00:26:07.923 }, 00:26:07.923 { 00:26:07.923 "name": "BaseBdev3", 00:26:07.923 "uuid": "8bfa2a84-5e28-489a-b40f-17a9c5e1a834", 00:26:07.923 "is_configured": true, 00:26:07.923 "data_offset": 2048, 00:26:07.923 "data_size": 63488 00:26:07.923 }, 00:26:07.923 { 00:26:07.923 "name": "BaseBdev4", 00:26:07.923 "uuid": "8773adec-8640-48a1-a9e1-d765fd8e5364", 00:26:07.923 "is_configured": true, 00:26:07.923 "data_offset": 2048, 00:26:07.923 "data_size": 63488 00:26:07.923 } 00:26:07.923 ] 00:26:07.923 } 00:26:07.923 } 00:26:07.923 }' 00:26:07.923 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:07.923 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:07.923 BaseBdev2 00:26:07.923 BaseBdev3 00:26:07.923 BaseBdev4' 00:26:07.923 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:07.923 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b NewBaseBdev 00:26:07.923 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:08.181 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:08.181 "name": "NewBaseBdev", 00:26:08.181 "aliases": [ 00:26:08.181 "25a0c859-bb57-4ccd-ad15-8a90e07b547d" 00:26:08.181 ], 00:26:08.181 "product_name": "Malloc disk", 00:26:08.181 "block_size": 512, 00:26:08.181 "num_blocks": 65536, 00:26:08.181 "uuid": "25a0c859-bb57-4ccd-ad15-8a90e07b547d", 00:26:08.181 "assigned_rate_limits": { 00:26:08.181 "rw_ios_per_sec": 0, 00:26:08.181 "rw_mbytes_per_sec": 0, 00:26:08.181 "r_mbytes_per_sec": 0, 00:26:08.181 "w_mbytes_per_sec": 0 00:26:08.181 }, 00:26:08.181 "claimed": true, 00:26:08.181 "claim_type": "exclusive_write", 00:26:08.181 "zoned": false, 00:26:08.181 "supported_io_types": { 00:26:08.181 "read": true, 00:26:08.181 "write": true, 00:26:08.181 "unmap": true, 00:26:08.182 "flush": true, 00:26:08.182 "reset": true, 00:26:08.182 "nvme_admin": false, 00:26:08.182 "nvme_io": false, 00:26:08.182 "nvme_io_md": false, 00:26:08.182 "write_zeroes": true, 00:26:08.182 "zcopy": true, 00:26:08.182 "get_zone_info": false, 00:26:08.182 "zone_management": false, 00:26:08.182 "zone_append": false, 00:26:08.182 "compare": false, 00:26:08.182 "compare_and_write": false, 00:26:08.182 "abort": true, 00:26:08.182 "seek_hole": false, 00:26:08.182 "seek_data": false, 00:26:08.182 "copy": true, 00:26:08.182 "nvme_iov_md": false 00:26:08.182 }, 00:26:08.182 "memory_domains": [ 00:26:08.182 { 00:26:08.182 "dma_device_id": "system", 00:26:08.182 "dma_device_type": 1 00:26:08.182 }, 00:26:08.182 { 00:26:08.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.182 "dma_device_type": 2 00:26:08.182 } 00:26:08.182 ], 00:26:08.182 "driver_specific": {} 00:26:08.182 }' 00:26:08.182 16:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:08.182 16:43:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:08.440 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:08.440 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:08.440 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:08.440 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:08.440 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:08.440 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:08.440 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:08.440 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:08.440 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:08.699 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:08.699 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:08.699 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:08.699 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:08.699 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:08.699 "name": "BaseBdev2", 00:26:08.699 "aliases": [ 00:26:08.699 "ca75682e-b602-4a80-94b4-1069f54c50aa" 00:26:08.699 ], 00:26:08.699 "product_name": "Malloc disk", 00:26:08.699 "block_size": 512, 00:26:08.699 "num_blocks": 65536, 00:26:08.699 "uuid": "ca75682e-b602-4a80-94b4-1069f54c50aa", 00:26:08.699 
"assigned_rate_limits": { 00:26:08.699 "rw_ios_per_sec": 0, 00:26:08.699 "rw_mbytes_per_sec": 0, 00:26:08.699 "r_mbytes_per_sec": 0, 00:26:08.699 "w_mbytes_per_sec": 0 00:26:08.699 }, 00:26:08.699 "claimed": true, 00:26:08.699 "claim_type": "exclusive_write", 00:26:08.699 "zoned": false, 00:26:08.699 "supported_io_types": { 00:26:08.699 "read": true, 00:26:08.699 "write": true, 00:26:08.699 "unmap": true, 00:26:08.699 "flush": true, 00:26:08.699 "reset": true, 00:26:08.699 "nvme_admin": false, 00:26:08.699 "nvme_io": false, 00:26:08.699 "nvme_io_md": false, 00:26:08.699 "write_zeroes": true, 00:26:08.699 "zcopy": true, 00:26:08.699 "get_zone_info": false, 00:26:08.699 "zone_management": false, 00:26:08.699 "zone_append": false, 00:26:08.699 "compare": false, 00:26:08.699 "compare_and_write": false, 00:26:08.699 "abort": true, 00:26:08.699 "seek_hole": false, 00:26:08.699 "seek_data": false, 00:26:08.699 "copy": true, 00:26:08.699 "nvme_iov_md": false 00:26:08.699 }, 00:26:08.699 "memory_domains": [ 00:26:08.699 { 00:26:08.699 "dma_device_id": "system", 00:26:08.699 "dma_device_type": 1 00:26:08.699 }, 00:26:08.699 { 00:26:08.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.699 "dma_device_type": 2 00:26:08.699 } 00:26:08.699 ], 00:26:08.699 "driver_specific": {} 00:26:08.699 }' 00:26:08.699 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:08.958 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:08.958 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:08.958 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:08.958 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:08.958 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:08.958 16:43:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:08.958 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:08.958 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:08.958 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.276 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.276 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:09.276 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:09.276 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:09.276 16:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:09.276 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:09.276 "name": "BaseBdev3", 00:26:09.276 "aliases": [ 00:26:09.276 "8bfa2a84-5e28-489a-b40f-17a9c5e1a834" 00:26:09.276 ], 00:26:09.276 "product_name": "Malloc disk", 00:26:09.276 "block_size": 512, 00:26:09.276 "num_blocks": 65536, 00:26:09.276 "uuid": "8bfa2a84-5e28-489a-b40f-17a9c5e1a834", 00:26:09.276 "assigned_rate_limits": { 00:26:09.276 "rw_ios_per_sec": 0, 00:26:09.276 "rw_mbytes_per_sec": 0, 00:26:09.276 "r_mbytes_per_sec": 0, 00:26:09.276 "w_mbytes_per_sec": 0 00:26:09.276 }, 00:26:09.276 "claimed": true, 00:26:09.276 "claim_type": "exclusive_write", 00:26:09.276 "zoned": false, 00:26:09.276 "supported_io_types": { 00:26:09.276 "read": true, 00:26:09.276 "write": true, 00:26:09.276 "unmap": true, 00:26:09.276 "flush": true, 00:26:09.276 "reset": true, 00:26:09.276 "nvme_admin": false, 00:26:09.276 "nvme_io": false, 00:26:09.276 "nvme_io_md": false, 00:26:09.276 
"write_zeroes": true, 00:26:09.276 "zcopy": true, 00:26:09.276 "get_zone_info": false, 00:26:09.276 "zone_management": false, 00:26:09.276 "zone_append": false, 00:26:09.276 "compare": false, 00:26:09.276 "compare_and_write": false, 00:26:09.276 "abort": true, 00:26:09.276 "seek_hole": false, 00:26:09.276 "seek_data": false, 00:26:09.276 "copy": true, 00:26:09.276 "nvme_iov_md": false 00:26:09.276 }, 00:26:09.276 "memory_domains": [ 00:26:09.276 { 00:26:09.276 "dma_device_id": "system", 00:26:09.276 "dma_device_type": 1 00:26:09.276 }, 00:26:09.276 { 00:26:09.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.276 "dma_device_type": 2 00:26:09.276 } 00:26:09.276 ], 00:26:09.276 "driver_specific": {} 00:26:09.276 }' 00:26:09.276 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:09.534 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:09.535 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:09.535 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.535 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.535 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:09.535 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.535 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.535 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:09.535 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.535 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.793 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:26:09.793 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:09.793 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:09.793 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:10.052 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:10.052 "name": "BaseBdev4", 00:26:10.052 "aliases": [ 00:26:10.052 "8773adec-8640-48a1-a9e1-d765fd8e5364" 00:26:10.052 ], 00:26:10.052 "product_name": "Malloc disk", 00:26:10.052 "block_size": 512, 00:26:10.052 "num_blocks": 65536, 00:26:10.052 "uuid": "8773adec-8640-48a1-a9e1-d765fd8e5364", 00:26:10.052 "assigned_rate_limits": { 00:26:10.052 "rw_ios_per_sec": 0, 00:26:10.052 "rw_mbytes_per_sec": 0, 00:26:10.052 "r_mbytes_per_sec": 0, 00:26:10.052 "w_mbytes_per_sec": 0 00:26:10.052 }, 00:26:10.052 "claimed": true, 00:26:10.052 "claim_type": "exclusive_write", 00:26:10.052 "zoned": false, 00:26:10.052 "supported_io_types": { 00:26:10.052 "read": true, 00:26:10.052 "write": true, 00:26:10.052 "unmap": true, 00:26:10.052 "flush": true, 00:26:10.052 "reset": true, 00:26:10.052 "nvme_admin": false, 00:26:10.052 "nvme_io": false, 00:26:10.052 "nvme_io_md": false, 00:26:10.052 "write_zeroes": true, 00:26:10.052 "zcopy": true, 00:26:10.052 "get_zone_info": false, 00:26:10.052 "zone_management": false, 00:26:10.052 "zone_append": false, 00:26:10.052 "compare": false, 00:26:10.052 "compare_and_write": false, 00:26:10.052 "abort": true, 00:26:10.052 "seek_hole": false, 00:26:10.052 "seek_data": false, 00:26:10.052 "copy": true, 00:26:10.052 "nvme_iov_md": false 00:26:10.052 }, 00:26:10.052 "memory_domains": [ 00:26:10.052 { 00:26:10.052 "dma_device_id": "system", 00:26:10.052 "dma_device_type": 1 00:26:10.052 }, 00:26:10.052 { 00:26:10.052 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.052 "dma_device_type": 2 00:26:10.052 } 00:26:10.052 ], 00:26:10.052 "driver_specific": {} 00:26:10.052 }' 00:26:10.052 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:10.052 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:10.052 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:10.052 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:10.052 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:10.052 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:10.052 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.052 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:10.311 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:10.311 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.311 16:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:10.311 16:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:10.311 16:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:10.569 [2024-07-24 16:43:07.209343] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:10.569 [2024-07-24 16:43:07.209375] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:10.569 [2024-07-24 16:43:07.209456] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:26:10.569 [2024-07-24 16:43:07.209796] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:10.569 [2024-07-24 16:43:07.209816] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name Existed_Raid, state offline 00:26:10.569 16:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 1725045 00:26:10.569 16:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 1725045 ']' 00:26:10.569 16:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 1725045 00:26:10.569 16:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:26:10.569 16:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:10.569 16:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1725045 00:26:10.569 16:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:10.569 16:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:10.569 16:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1725045' 00:26:10.569 killing process with pid 1725045 00:26:10.569 16:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 1725045 00:26:10.569 [2024-07-24 16:43:07.283834] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:10.569 16:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 1725045 00:26:11.136 [2024-07-24 16:43:07.733221] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:13.039 16:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 
00:26:13.039 00:26:13.039 real 0m35.452s 00:26:13.039 user 1m2.303s 00:26:13.039 sys 0m5.983s 00:26:13.039 16:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:13.039 16:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:13.039 ************************************ 00:26:13.039 END TEST raid_state_function_test_sb 00:26:13.039 ************************************ 00:26:13.039 16:43:09 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:26:13.039 16:43:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:13.039 16:43:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:13.039 16:43:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:13.039 ************************************ 00:26:13.039 START TEST raid_superblock_test 00:26:13.039 ************************************ 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=1731532 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 1731532 /var/tmp/spdk-raid.sock 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 1731532 ']' 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:13.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.039 16:43:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.039 [2024-07-24 16:43:09.613191] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:26:13.039 [2024-07-24 16:43:09.613310] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731532 ] 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:01.7 
cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:13.039 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:13.039 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:13.039 [2024-07-24 16:43:09.841222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.298 [2024-07-24 16:43:10.129663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.864 [2024-07-24 16:43:10.445441] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:13.864 [2024-07-24 16:43:10.445474] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:13.864 16:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:13.864 16:43:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:26:13.864 16:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:26:13.864 16:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 
00:26:13.864 16:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:26:13.864 16:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:26:13.864 16:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:13.864 16:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:13.864 16:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:13.864 16:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:13.864 16:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:14.122 malloc1 00:26:14.123 16:43:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:14.381 [2024-07-24 16:43:11.095356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:14.381 [2024-07-24 16:43:11.095416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.381 [2024-07-24 16:43:11.095446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:26:14.381 [2024-07-24 16:43:11.095463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.381 [2024-07-24 16:43:11.098212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.381 [2024-07-24 16:43:11.098248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:14.381 pt1 00:26:14.381 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- 
# (( i++ )) 00:26:14.381 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:14.381 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:26:14.381 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:26:14.381 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:14.381 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:14.381 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:14.381 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:14.381 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:14.640 malloc2 00:26:14.640 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:14.899 [2024-07-24 16:43:11.583044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:14.899 [2024-07-24 16:43:11.583100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.899 [2024-07-24 16:43:11.583128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:26:14.899 [2024-07-24 16:43:11.583152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.899 [2024-07-24 16:43:11.585902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.899 [2024-07-24 16:43:11.585942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:26:14.899 pt2 00:26:14.899 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:14.899 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:14.899 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:26:14.899 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:26:14.899 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:14.899 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:14.899 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:14.899 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:14.899 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:15.158 malloc3 00:26:15.158 16:43:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:15.417 [2024-07-24 16:43:12.075092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:15.417 [2024-07-24 16:43:12.075160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:15.417 [2024-07-24 16:43:12.075193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:26:15.417 [2024-07-24 16:43:12.075210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:15.417 [2024-07-24 16:43:12.077931] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:26:15.417 [2024-07-24 16:43:12.077968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:15.417 pt3 00:26:15.417 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:15.417 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:15.417 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:26:15.417 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:26:15.417 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:15.417 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:15.417 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:15.417 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:15.417 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:26:15.676 malloc4 00:26:15.676 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:15.934 [2024-07-24 16:43:12.576275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:15.934 [2024-07-24 16:43:12.576339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:15.934 [2024-07-24 16:43:12.576370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:26:15.934 [2024-07-24 16:43:12.576387] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:15.934 [2024-07-24 16:43:12.579133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:15.934 [2024-07-24 16:43:12.579177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:15.934 pt4 00:26:15.934 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:15.934 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:15.934 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:26:16.192 [2024-07-24 16:43:12.804981] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:16.192 [2024-07-24 16:43:12.807297] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:16.192 [2024-07-24 16:43:12.807389] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:16.192 [2024-07-24 16:43:12.807446] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:16.192 [2024-07-24 16:43:12.807695] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042080 00:26:16.192 [2024-07-24 16:43:12.807715] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:16.192 [2024-07-24 16:43:12.808059] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:26:16.192 [2024-07-24 16:43:12.808345] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042080 00:26:16.192 [2024-07-24 16:43:12.808365] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042080 00:26:16.192 [2024-07-24 16:43:12.808576] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:26:16.192 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:16.192 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:16.192 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:16.192 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:16.192 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:16.192 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:16.192 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:16.192 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:16.192 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:16.192 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:16.192 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.192 16:43:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.192 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:16.192 "name": "raid_bdev1", 00:26:16.192 "uuid": "78543c8b-61d2-4549-abb4-4108c0ed550c", 00:26:16.192 "strip_size_kb": 0, 00:26:16.192 "state": "online", 00:26:16.192 "raid_level": "raid1", 00:26:16.192 "superblock": true, 00:26:16.192 "num_base_bdevs": 4, 00:26:16.192 "num_base_bdevs_discovered": 4, 00:26:16.192 "num_base_bdevs_operational": 4, 00:26:16.192 "base_bdevs_list": [ 00:26:16.192 { 00:26:16.192 "name": "pt1", 00:26:16.192 
"uuid": "00000000-0000-0000-0000-000000000001", 00:26:16.192 "is_configured": true, 00:26:16.192 "data_offset": 2048, 00:26:16.192 "data_size": 63488 00:26:16.192 }, 00:26:16.192 { 00:26:16.192 "name": "pt2", 00:26:16.192 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:16.192 "is_configured": true, 00:26:16.192 "data_offset": 2048, 00:26:16.192 "data_size": 63488 00:26:16.192 }, 00:26:16.192 { 00:26:16.192 "name": "pt3", 00:26:16.192 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:16.192 "is_configured": true, 00:26:16.192 "data_offset": 2048, 00:26:16.192 "data_size": 63488 00:26:16.192 }, 00:26:16.192 { 00:26:16.192 "name": "pt4", 00:26:16.192 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:16.192 "is_configured": true, 00:26:16.192 "data_offset": 2048, 00:26:16.192 "data_size": 63488 00:26:16.192 } 00:26:16.192 ] 00:26:16.192 }' 00:26:16.192 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:16.192 16:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.126 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:26:17.126 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:17.126 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:17.126 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:17.126 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:17.126 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:17.126 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:17.126 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:26:17.126 [2024-07-24 16:43:13.836102] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:17.126 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:17.126 "name": "raid_bdev1", 00:26:17.126 "aliases": [ 00:26:17.126 "78543c8b-61d2-4549-abb4-4108c0ed550c" 00:26:17.126 ], 00:26:17.126 "product_name": "Raid Volume", 00:26:17.126 "block_size": 512, 00:26:17.126 "num_blocks": 63488, 00:26:17.126 "uuid": "78543c8b-61d2-4549-abb4-4108c0ed550c", 00:26:17.126 "assigned_rate_limits": { 00:26:17.126 "rw_ios_per_sec": 0, 00:26:17.126 "rw_mbytes_per_sec": 0, 00:26:17.126 "r_mbytes_per_sec": 0, 00:26:17.126 "w_mbytes_per_sec": 0 00:26:17.126 }, 00:26:17.126 "claimed": false, 00:26:17.126 "zoned": false, 00:26:17.126 "supported_io_types": { 00:26:17.126 "read": true, 00:26:17.126 "write": true, 00:26:17.126 "unmap": false, 00:26:17.126 "flush": false, 00:26:17.126 "reset": true, 00:26:17.126 "nvme_admin": false, 00:26:17.126 "nvme_io": false, 00:26:17.126 "nvme_io_md": false, 00:26:17.126 "write_zeroes": true, 00:26:17.126 "zcopy": false, 00:26:17.126 "get_zone_info": false, 00:26:17.126 "zone_management": false, 00:26:17.126 "zone_append": false, 00:26:17.126 "compare": false, 00:26:17.126 "compare_and_write": false, 00:26:17.126 "abort": false, 00:26:17.126 "seek_hole": false, 00:26:17.126 "seek_data": false, 00:26:17.126 "copy": false, 00:26:17.126 "nvme_iov_md": false 00:26:17.126 }, 00:26:17.126 "memory_domains": [ 00:26:17.126 { 00:26:17.126 "dma_device_id": "system", 00:26:17.126 "dma_device_type": 1 00:26:17.126 }, 00:26:17.126 { 00:26:17.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.126 "dma_device_type": 2 00:26:17.126 }, 00:26:17.126 { 00:26:17.126 "dma_device_id": "system", 00:26:17.126 "dma_device_type": 1 00:26:17.126 }, 00:26:17.126 { 00:26:17.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.126 "dma_device_type": 2 00:26:17.126 }, 00:26:17.126 { 00:26:17.126 
"dma_device_id": "system", 00:26:17.126 "dma_device_type": 1 00:26:17.126 }, 00:26:17.126 { 00:26:17.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.126 "dma_device_type": 2 00:26:17.126 }, 00:26:17.126 { 00:26:17.126 "dma_device_id": "system", 00:26:17.126 "dma_device_type": 1 00:26:17.126 }, 00:26:17.126 { 00:26:17.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.126 "dma_device_type": 2 00:26:17.126 } 00:26:17.126 ], 00:26:17.126 "driver_specific": { 00:26:17.126 "raid": { 00:26:17.126 "uuid": "78543c8b-61d2-4549-abb4-4108c0ed550c", 00:26:17.126 "strip_size_kb": 0, 00:26:17.126 "state": "online", 00:26:17.126 "raid_level": "raid1", 00:26:17.126 "superblock": true, 00:26:17.126 "num_base_bdevs": 4, 00:26:17.126 "num_base_bdevs_discovered": 4, 00:26:17.126 "num_base_bdevs_operational": 4, 00:26:17.126 "base_bdevs_list": [ 00:26:17.126 { 00:26:17.126 "name": "pt1", 00:26:17.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:17.126 "is_configured": true, 00:26:17.126 "data_offset": 2048, 00:26:17.126 "data_size": 63488 00:26:17.126 }, 00:26:17.126 { 00:26:17.126 "name": "pt2", 00:26:17.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:17.126 "is_configured": true, 00:26:17.126 "data_offset": 2048, 00:26:17.126 "data_size": 63488 00:26:17.126 }, 00:26:17.126 { 00:26:17.126 "name": "pt3", 00:26:17.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:17.126 "is_configured": true, 00:26:17.126 "data_offset": 2048, 00:26:17.126 "data_size": 63488 00:26:17.126 }, 00:26:17.126 { 00:26:17.126 "name": "pt4", 00:26:17.126 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:17.126 "is_configured": true, 00:26:17.126 "data_offset": 2048, 00:26:17.126 "data_size": 63488 00:26:17.126 } 00:26:17.126 ] 00:26:17.126 } 00:26:17.126 } 00:26:17.126 }' 00:26:17.126 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:17.126 16:43:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:17.126 pt2 00:26:17.126 pt3 00:26:17.126 pt4' 00:26:17.126 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:17.126 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:17.127 16:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:17.385 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:17.385 "name": "pt1", 00:26:17.385 "aliases": [ 00:26:17.385 "00000000-0000-0000-0000-000000000001" 00:26:17.385 ], 00:26:17.385 "product_name": "passthru", 00:26:17.385 "block_size": 512, 00:26:17.385 "num_blocks": 65536, 00:26:17.385 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:17.385 "assigned_rate_limits": { 00:26:17.385 "rw_ios_per_sec": 0, 00:26:17.385 "rw_mbytes_per_sec": 0, 00:26:17.385 "r_mbytes_per_sec": 0, 00:26:17.385 "w_mbytes_per_sec": 0 00:26:17.385 }, 00:26:17.385 "claimed": true, 00:26:17.385 "claim_type": "exclusive_write", 00:26:17.385 "zoned": false, 00:26:17.385 "supported_io_types": { 00:26:17.385 "read": true, 00:26:17.385 "write": true, 00:26:17.385 "unmap": true, 00:26:17.385 "flush": true, 00:26:17.385 "reset": true, 00:26:17.385 "nvme_admin": false, 00:26:17.385 "nvme_io": false, 00:26:17.385 "nvme_io_md": false, 00:26:17.385 "write_zeroes": true, 00:26:17.385 "zcopy": true, 00:26:17.385 "get_zone_info": false, 00:26:17.385 "zone_management": false, 00:26:17.385 "zone_append": false, 00:26:17.385 "compare": false, 00:26:17.385 "compare_and_write": false, 00:26:17.385 "abort": true, 00:26:17.385 "seek_hole": false, 00:26:17.385 "seek_data": false, 00:26:17.385 "copy": true, 00:26:17.385 "nvme_iov_md": false 00:26:17.385 }, 00:26:17.385 "memory_domains": [ 00:26:17.385 { 00:26:17.385 "dma_device_id": 
"system", 00:26:17.385 "dma_device_type": 1 00:26:17.385 }, 00:26:17.385 { 00:26:17.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.385 "dma_device_type": 2 00:26:17.385 } 00:26:17.385 ], 00:26:17.385 "driver_specific": { 00:26:17.385 "passthru": { 00:26:17.385 "name": "pt1", 00:26:17.385 "base_bdev_name": "malloc1" 00:26:17.385 } 00:26:17.385 } 00:26:17.385 }' 00:26:17.385 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.385 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.385 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:17.385 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.643 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.643 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:17.643 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.643 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.643 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:17.643 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.643 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.643 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:17.643 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:17.643 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:17.643 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:17.902 16:43:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:17.902 "name": "pt2", 00:26:17.902 "aliases": [ 00:26:17.902 "00000000-0000-0000-0000-000000000002" 00:26:17.902 ], 00:26:17.902 "product_name": "passthru", 00:26:17.902 "block_size": 512, 00:26:17.902 "num_blocks": 65536, 00:26:17.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:17.902 "assigned_rate_limits": { 00:26:17.902 "rw_ios_per_sec": 0, 00:26:17.902 "rw_mbytes_per_sec": 0, 00:26:17.902 "r_mbytes_per_sec": 0, 00:26:17.902 "w_mbytes_per_sec": 0 00:26:17.902 }, 00:26:17.902 "claimed": true, 00:26:17.902 "claim_type": "exclusive_write", 00:26:17.902 "zoned": false, 00:26:17.902 "supported_io_types": { 00:26:17.902 "read": true, 00:26:17.902 "write": true, 00:26:17.902 "unmap": true, 00:26:17.902 "flush": true, 00:26:17.902 "reset": true, 00:26:17.902 "nvme_admin": false, 00:26:17.902 "nvme_io": false, 00:26:17.902 "nvme_io_md": false, 00:26:17.902 "write_zeroes": true, 00:26:17.902 "zcopy": true, 00:26:17.902 "get_zone_info": false, 00:26:17.902 "zone_management": false, 00:26:17.902 "zone_append": false, 00:26:17.902 "compare": false, 00:26:17.902 "compare_and_write": false, 00:26:17.902 "abort": true, 00:26:17.902 "seek_hole": false, 00:26:17.902 "seek_data": false, 00:26:17.902 "copy": true, 00:26:17.902 "nvme_iov_md": false 00:26:17.902 }, 00:26:17.902 "memory_domains": [ 00:26:17.902 { 00:26:17.902 "dma_device_id": "system", 00:26:17.902 "dma_device_type": 1 00:26:17.902 }, 00:26:17.902 { 00:26:17.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.902 "dma_device_type": 2 00:26:17.902 } 00:26:17.902 ], 00:26:17.902 "driver_specific": { 00:26:17.902 "passthru": { 00:26:17.902 "name": "pt2", 00:26:17.902 "base_bdev_name": "malloc2" 00:26:17.902 } 00:26:17.902 } 00:26:17.902 }' 00:26:17.902 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.902 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:26:18.192 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:18.192 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.192 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.192 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:18.192 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.192 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.192 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:18.192 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.192 16:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.451 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.451 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:18.451 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:18.451 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:18.451 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:18.451 "name": "pt3", 00:26:18.451 "aliases": [ 00:26:18.451 "00000000-0000-0000-0000-000000000003" 00:26:18.451 ], 00:26:18.451 "product_name": "passthru", 00:26:18.451 "block_size": 512, 00:26:18.451 "num_blocks": 65536, 00:26:18.451 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:18.451 "assigned_rate_limits": { 00:26:18.451 "rw_ios_per_sec": 0, 00:26:18.451 "rw_mbytes_per_sec": 0, 00:26:18.451 "r_mbytes_per_sec": 0, 00:26:18.451 "w_mbytes_per_sec": 0 00:26:18.451 }, 
00:26:18.451 "claimed": true, 00:26:18.451 "claim_type": "exclusive_write", 00:26:18.451 "zoned": false, 00:26:18.451 "supported_io_types": { 00:26:18.451 "read": true, 00:26:18.451 "write": true, 00:26:18.451 "unmap": true, 00:26:18.451 "flush": true, 00:26:18.451 "reset": true, 00:26:18.451 "nvme_admin": false, 00:26:18.451 "nvme_io": false, 00:26:18.451 "nvme_io_md": false, 00:26:18.451 "write_zeroes": true, 00:26:18.451 "zcopy": true, 00:26:18.451 "get_zone_info": false, 00:26:18.451 "zone_management": false, 00:26:18.451 "zone_append": false, 00:26:18.451 "compare": false, 00:26:18.451 "compare_and_write": false, 00:26:18.451 "abort": true, 00:26:18.451 "seek_hole": false, 00:26:18.451 "seek_data": false, 00:26:18.451 "copy": true, 00:26:18.451 "nvme_iov_md": false 00:26:18.451 }, 00:26:18.451 "memory_domains": [ 00:26:18.451 { 00:26:18.451 "dma_device_id": "system", 00:26:18.451 "dma_device_type": 1 00:26:18.451 }, 00:26:18.451 { 00:26:18.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.451 "dma_device_type": 2 00:26:18.451 } 00:26:18.451 ], 00:26:18.451 "driver_specific": { 00:26:18.451 "passthru": { 00:26:18.451 "name": "pt3", 00:26:18.451 "base_bdev_name": "malloc3" 00:26:18.451 } 00:26:18.451 } 00:26:18.451 }' 00:26:18.451 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.709 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.709 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:18.709 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.709 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.710 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:18.710 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.710 16:43:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.710 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:18.710 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.968 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.968 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.968 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:18.968 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:18.968 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:19.228 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:19.228 "name": "pt4", 00:26:19.228 "aliases": [ 00:26:19.228 "00000000-0000-0000-0000-000000000004" 00:26:19.228 ], 00:26:19.228 "product_name": "passthru", 00:26:19.228 "block_size": 512, 00:26:19.228 "num_blocks": 65536, 00:26:19.228 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:19.228 "assigned_rate_limits": { 00:26:19.228 "rw_ios_per_sec": 0, 00:26:19.228 "rw_mbytes_per_sec": 0, 00:26:19.228 "r_mbytes_per_sec": 0, 00:26:19.228 "w_mbytes_per_sec": 0 00:26:19.228 }, 00:26:19.228 "claimed": true, 00:26:19.228 "claim_type": "exclusive_write", 00:26:19.228 "zoned": false, 00:26:19.228 "supported_io_types": { 00:26:19.228 "read": true, 00:26:19.228 "write": true, 00:26:19.228 "unmap": true, 00:26:19.228 "flush": true, 00:26:19.228 "reset": true, 00:26:19.228 "nvme_admin": false, 00:26:19.228 "nvme_io": false, 00:26:19.228 "nvme_io_md": false, 00:26:19.228 "write_zeroes": true, 00:26:19.228 "zcopy": true, 00:26:19.228 "get_zone_info": false, 00:26:19.228 "zone_management": false, 00:26:19.228 "zone_append": false, 00:26:19.228 
"compare": false, 00:26:19.228 "compare_and_write": false, 00:26:19.228 "abort": true, 00:26:19.228 "seek_hole": false, 00:26:19.228 "seek_data": false, 00:26:19.228 "copy": true, 00:26:19.228 "nvme_iov_md": false 00:26:19.228 }, 00:26:19.228 "memory_domains": [ 00:26:19.228 { 00:26:19.228 "dma_device_id": "system", 00:26:19.228 "dma_device_type": 1 00:26:19.228 }, 00:26:19.228 { 00:26:19.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:19.228 "dma_device_type": 2 00:26:19.228 } 00:26:19.228 ], 00:26:19.228 "driver_specific": { 00:26:19.228 "passthru": { 00:26:19.228 "name": "pt4", 00:26:19.228 "base_bdev_name": "malloc4" 00:26:19.228 } 00:26:19.228 } 00:26:19.228 }' 00:26:19.228 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:19.228 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:19.228 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:19.228 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.228 16:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.228 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:19.228 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.228 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.228 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:19.228 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.487 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.487 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:19.487 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:19.487 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:26:19.746 [2024-07-24 16:43:16.370955] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:19.746 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=78543c8b-61d2-4549-abb4-4108c0ed550c 00:26:19.746 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 78543c8b-61d2-4549-abb4-4108c0ed550c ']' 00:26:19.746 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:19.746 [2024-07-24 16:43:16.547032] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:19.746 [2024-07-24 16:43:16.547060] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:19.746 [2024-07-24 16:43:16.547146] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:19.746 [2024-07-24 16:43:16.547248] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:19.746 [2024-07-24 16:43:16.547271] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042080 name raid_bdev1, state offline 00:26:19.746 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:26:19.746 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.006 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:26:20.006 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:26:20.006 16:43:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:20.006 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:20.265 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:20.265 16:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:20.524 16:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:20.524 16:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:20.524 16:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:20.524 16:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 
-- # local es=0 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:20.784 16:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:21.044 [2024-07-24 16:43:17.790325] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:21.044 [2024-07-24 16:43:17.792635] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:21.044 [2024-07-24 
16:43:17.792693] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:21.044 [2024-07-24 16:43:17.792739] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:21.044 [2024-07-24 16:43:17.792792] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:21.044 [2024-07-24 16:43:17.792843] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:21.044 [2024-07-24 16:43:17.792873] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:21.044 [2024-07-24 16:43:17.792903] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:26:21.044 [2024-07-24 16:43:17.792925] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:21.044 [2024-07-24 16:43:17.792942] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state configuring 00:26:21.044 request: 00:26:21.044 { 00:26:21.044 "name": "raid_bdev1", 00:26:21.044 "raid_level": "raid1", 00:26:21.044 "base_bdevs": [ 00:26:21.044 "malloc1", 00:26:21.044 "malloc2", 00:26:21.044 "malloc3", 00:26:21.044 "malloc4" 00:26:21.044 ], 00:26:21.044 "superblock": false, 00:26:21.044 "method": "bdev_raid_create", 00:26:21.044 "req_id": 1 00:26:21.044 } 00:26:21.044 Got JSON-RPC error response 00:26:21.044 response: 00:26:21.044 { 00:26:21.044 "code": -17, 00:26:21.044 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:21.044 } 00:26:21.044 16:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:26:21.044 16:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:21.044 16:43:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:21.044 16:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:21.044 16:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.044 16:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:26:21.303 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:26:21.303 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:26:21.303 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:21.563 [2024-07-24 16:43:18.247501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:21.563 [2024-07-24 16:43:18.247561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:21.563 [2024-07-24 16:43:18.247584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:26:21.563 [2024-07-24 16:43:18.247601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:21.563 [2024-07-24 16:43:18.250360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:21.563 [2024-07-24 16:43:18.250399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:21.563 [2024-07-24 16:43:18.250492] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:21.563 [2024-07-24 16:43:18.250556] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:21.563 pt1 00:26:21.563 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 
configuring raid1 0 4 00:26:21.563 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:21.563 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:21.563 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:21.563 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:21.563 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:21.563 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:21.563 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:21.563 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:21.563 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:21.563 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.563 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.822 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:21.822 "name": "raid_bdev1", 00:26:21.822 "uuid": "78543c8b-61d2-4549-abb4-4108c0ed550c", 00:26:21.822 "strip_size_kb": 0, 00:26:21.822 "state": "configuring", 00:26:21.822 "raid_level": "raid1", 00:26:21.822 "superblock": true, 00:26:21.822 "num_base_bdevs": 4, 00:26:21.822 "num_base_bdevs_discovered": 1, 00:26:21.822 "num_base_bdevs_operational": 4, 00:26:21.822 "base_bdevs_list": [ 00:26:21.822 { 00:26:21.822 "name": "pt1", 00:26:21.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:21.822 "is_configured": true, 00:26:21.822 "data_offset": 2048, 
00:26:21.822 "data_size": 63488 00:26:21.822 }, 00:26:21.822 { 00:26:21.822 "name": null, 00:26:21.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:21.822 "is_configured": false, 00:26:21.822 "data_offset": 2048, 00:26:21.823 "data_size": 63488 00:26:21.823 }, 00:26:21.823 { 00:26:21.823 "name": null, 00:26:21.823 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:21.823 "is_configured": false, 00:26:21.823 "data_offset": 2048, 00:26:21.823 "data_size": 63488 00:26:21.823 }, 00:26:21.823 { 00:26:21.823 "name": null, 00:26:21.823 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:21.823 "is_configured": false, 00:26:21.823 "data_offset": 2048, 00:26:21.823 "data_size": 63488 00:26:21.823 } 00:26:21.823 ] 00:26:21.823 }' 00:26:21.823 16:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:21.823 16:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.391 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:26:22.391 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:22.391 [2024-07-24 16:43:19.234275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:22.391 [2024-07-24 16:43:19.234341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.391 [2024-07-24 16:43:19.234366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 00:26:22.391 [2024-07-24 16:43:19.234384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.391 [2024-07-24 16:43:19.234952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.391 [2024-07-24 16:43:19.234980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:26:22.391 [2024-07-24 16:43:19.235072] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:22.391 [2024-07-24 16:43:19.235104] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:22.391 pt2 00:26:22.650 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:22.650 [2024-07-24 16:43:19.462928] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:22.650 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:22.650 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:22.650 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:22.650 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:22.650 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:22.650 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:22.650 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:22.650 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:22.650 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:22.650 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:22.650 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.650 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:26:22.909 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:22.909 "name": "raid_bdev1", 00:26:22.909 "uuid": "78543c8b-61d2-4549-abb4-4108c0ed550c", 00:26:22.909 "strip_size_kb": 0, 00:26:22.910 "state": "configuring", 00:26:22.910 "raid_level": "raid1", 00:26:22.910 "superblock": true, 00:26:22.910 "num_base_bdevs": 4, 00:26:22.910 "num_base_bdevs_discovered": 1, 00:26:22.910 "num_base_bdevs_operational": 4, 00:26:22.910 "base_bdevs_list": [ 00:26:22.910 { 00:26:22.910 "name": "pt1", 00:26:22.910 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:22.910 "is_configured": true, 00:26:22.910 "data_offset": 2048, 00:26:22.910 "data_size": 63488 00:26:22.910 }, 00:26:22.910 { 00:26:22.910 "name": null, 00:26:22.910 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:22.910 "is_configured": false, 00:26:22.910 "data_offset": 2048, 00:26:22.910 "data_size": 63488 00:26:22.910 }, 00:26:22.910 { 00:26:22.910 "name": null, 00:26:22.910 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:22.910 "is_configured": false, 00:26:22.910 "data_offset": 2048, 00:26:22.910 "data_size": 63488 00:26:22.910 }, 00:26:22.910 { 00:26:22.910 "name": null, 00:26:22.910 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:22.910 "is_configured": false, 00:26:22.910 "data_offset": 2048, 00:26:22.910 "data_size": 63488 00:26:22.910 } 00:26:22.910 ] 00:26:22.910 }' 00:26:22.910 16:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:22.910 16:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.478 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:26:23.478 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:23.478 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:23.737 [2024-07-24 16:43:20.485656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:23.737 [2024-07-24 16:43:20.485719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.737 [2024-07-24 16:43:20.485751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:26:23.737 [2024-07-24 16:43:20.485767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.737 [2024-07-24 16:43:20.486333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.738 [2024-07-24 16:43:20.486359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:23.738 [2024-07-24 16:43:20.486460] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:23.738 [2024-07-24 16:43:20.486486] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:23.738 pt2 00:26:23.738 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:23.738 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:23.738 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:23.997 [2024-07-24 16:43:20.710296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:23.997 [2024-07-24 16:43:20.710352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.997 [2024-07-24 16:43:20.710378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043b80 00:26:23.997 [2024-07-24 16:43:20.710394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:26:23.997 [2024-07-24 16:43:20.710985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.997 [2024-07-24 16:43:20.711012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:23.997 [2024-07-24 16:43:20.711108] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:23.997 [2024-07-24 16:43:20.711134] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:23.997 pt3 00:26:23.997 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:23.997 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:23.997 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:24.257 [2024-07-24 16:43:20.922876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:24.257 [2024-07-24 16:43:20.922931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.257 [2024-07-24 16:43:20.922960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:26:24.257 [2024-07-24 16:43:20.922975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.257 [2024-07-24 16:43:20.923512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.257 [2024-07-24 16:43:20.923538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:24.257 [2024-07-24 16:43:20.923634] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:24.257 [2024-07-24 16:43:20.923662] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:24.257 [2024-07-24 16:43:20.923875] 
bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:26:24.257 [2024-07-24 16:43:20.923889] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:24.257 [2024-07-24 16:43:20.924211] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:26:24.257 [2024-07-24 16:43:20.924462] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:26:24.257 [2024-07-24 16:43:20.924480] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:26:24.257 [2024-07-24 16:43:20.924661] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:24.257 pt4 00:26:24.257 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:24.257 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:24.257 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:24.257 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:24.257 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:24.257 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:24.257 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:24.257 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:24.257 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:24.257 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:24.257 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:24.257 16:43:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:24.257 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.257 16:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:24.516 16:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:24.516 "name": "raid_bdev1", 00:26:24.516 "uuid": "78543c8b-61d2-4549-abb4-4108c0ed550c", 00:26:24.516 "strip_size_kb": 0, 00:26:24.516 "state": "online", 00:26:24.516 "raid_level": "raid1", 00:26:24.516 "superblock": true, 00:26:24.516 "num_base_bdevs": 4, 00:26:24.516 "num_base_bdevs_discovered": 4, 00:26:24.516 "num_base_bdevs_operational": 4, 00:26:24.516 "base_bdevs_list": [ 00:26:24.516 { 00:26:24.516 "name": "pt1", 00:26:24.516 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:24.516 "is_configured": true, 00:26:24.516 "data_offset": 2048, 00:26:24.516 "data_size": 63488 00:26:24.516 }, 00:26:24.516 { 00:26:24.516 "name": "pt2", 00:26:24.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:24.516 "is_configured": true, 00:26:24.516 "data_offset": 2048, 00:26:24.516 "data_size": 63488 00:26:24.516 }, 00:26:24.516 { 00:26:24.516 "name": "pt3", 00:26:24.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:24.516 "is_configured": true, 00:26:24.516 "data_offset": 2048, 00:26:24.516 "data_size": 63488 00:26:24.516 }, 00:26:24.516 { 00:26:24.516 "name": "pt4", 00:26:24.516 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:24.516 "is_configured": true, 00:26:24.516 "data_offset": 2048, 00:26:24.516 "data_size": 63488 00:26:24.516 } 00:26:24.516 ] 00:26:24.516 }' 00:26:24.516 16:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:24.516 16:43:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:26:25.085 16:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:26:25.085 16:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:25.085 16:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:25.085 16:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:25.085 16:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:25.085 16:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:25.085 16:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:25.085 16:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:25.345 [2024-07-24 16:43:21.970102] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:25.345 16:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:25.345 "name": "raid_bdev1", 00:26:25.345 "aliases": [ 00:26:25.345 "78543c8b-61d2-4549-abb4-4108c0ed550c" 00:26:25.345 ], 00:26:25.345 "product_name": "Raid Volume", 00:26:25.345 "block_size": 512, 00:26:25.345 "num_blocks": 63488, 00:26:25.345 "uuid": "78543c8b-61d2-4549-abb4-4108c0ed550c", 00:26:25.345 "assigned_rate_limits": { 00:26:25.345 "rw_ios_per_sec": 0, 00:26:25.345 "rw_mbytes_per_sec": 0, 00:26:25.345 "r_mbytes_per_sec": 0, 00:26:25.345 "w_mbytes_per_sec": 0 00:26:25.345 }, 00:26:25.345 "claimed": false, 00:26:25.345 "zoned": false, 00:26:25.345 "supported_io_types": { 00:26:25.345 "read": true, 00:26:25.345 "write": true, 00:26:25.345 "unmap": false, 00:26:25.345 "flush": false, 00:26:25.345 "reset": true, 00:26:25.345 "nvme_admin": false, 00:26:25.345 "nvme_io": false, 00:26:25.345 "nvme_io_md": false, 
00:26:25.345 "write_zeroes": true, 00:26:25.345 "zcopy": false, 00:26:25.345 "get_zone_info": false, 00:26:25.345 "zone_management": false, 00:26:25.345 "zone_append": false, 00:26:25.345 "compare": false, 00:26:25.345 "compare_and_write": false, 00:26:25.345 "abort": false, 00:26:25.345 "seek_hole": false, 00:26:25.345 "seek_data": false, 00:26:25.345 "copy": false, 00:26:25.345 "nvme_iov_md": false 00:26:25.345 }, 00:26:25.345 "memory_domains": [ 00:26:25.345 { 00:26:25.345 "dma_device_id": "system", 00:26:25.345 "dma_device_type": 1 00:26:25.345 }, 00:26:25.345 { 00:26:25.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.345 "dma_device_type": 2 00:26:25.345 }, 00:26:25.345 { 00:26:25.345 "dma_device_id": "system", 00:26:25.345 "dma_device_type": 1 00:26:25.345 }, 00:26:25.345 { 00:26:25.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.345 "dma_device_type": 2 00:26:25.345 }, 00:26:25.345 { 00:26:25.345 "dma_device_id": "system", 00:26:25.345 "dma_device_type": 1 00:26:25.345 }, 00:26:25.345 { 00:26:25.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.345 "dma_device_type": 2 00:26:25.345 }, 00:26:25.345 { 00:26:25.345 "dma_device_id": "system", 00:26:25.345 "dma_device_type": 1 00:26:25.345 }, 00:26:25.345 { 00:26:25.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.345 "dma_device_type": 2 00:26:25.345 } 00:26:25.345 ], 00:26:25.345 "driver_specific": { 00:26:25.345 "raid": { 00:26:25.345 "uuid": "78543c8b-61d2-4549-abb4-4108c0ed550c", 00:26:25.345 "strip_size_kb": 0, 00:26:25.345 "state": "online", 00:26:25.345 "raid_level": "raid1", 00:26:25.345 "superblock": true, 00:26:25.345 "num_base_bdevs": 4, 00:26:25.345 "num_base_bdevs_discovered": 4, 00:26:25.345 "num_base_bdevs_operational": 4, 00:26:25.345 "base_bdevs_list": [ 00:26:25.345 { 00:26:25.345 "name": "pt1", 00:26:25.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:25.345 "is_configured": true, 00:26:25.345 "data_offset": 2048, 00:26:25.345 "data_size": 63488 00:26:25.345 
}, 00:26:25.345 { 00:26:25.345 "name": "pt2", 00:26:25.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:25.345 "is_configured": true, 00:26:25.345 "data_offset": 2048, 00:26:25.345 "data_size": 63488 00:26:25.345 }, 00:26:25.345 { 00:26:25.345 "name": "pt3", 00:26:25.345 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:25.345 "is_configured": true, 00:26:25.345 "data_offset": 2048, 00:26:25.345 "data_size": 63488 00:26:25.345 }, 00:26:25.345 { 00:26:25.345 "name": "pt4", 00:26:25.345 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:25.345 "is_configured": true, 00:26:25.345 "data_offset": 2048, 00:26:25.345 "data_size": 63488 00:26:25.345 } 00:26:25.345 ] 00:26:25.345 } 00:26:25.345 } 00:26:25.345 }' 00:26:25.345 16:43:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:25.345 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:25.345 pt2 00:26:25.345 pt3 00:26:25.345 pt4' 00:26:25.345 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:25.345 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:25.345 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:25.604 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:25.605 "name": "pt1", 00:26:25.605 "aliases": [ 00:26:25.605 "00000000-0000-0000-0000-000000000001" 00:26:25.605 ], 00:26:25.605 "product_name": "passthru", 00:26:25.605 "block_size": 512, 00:26:25.605 "num_blocks": 65536, 00:26:25.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:25.605 "assigned_rate_limits": { 00:26:25.605 "rw_ios_per_sec": 0, 00:26:25.605 "rw_mbytes_per_sec": 0, 00:26:25.605 "r_mbytes_per_sec": 0, 
00:26:25.605 "w_mbytes_per_sec": 0 00:26:25.605 }, 00:26:25.605 "claimed": true, 00:26:25.605 "claim_type": "exclusive_write", 00:26:25.605 "zoned": false, 00:26:25.605 "supported_io_types": { 00:26:25.605 "read": true, 00:26:25.605 "write": true, 00:26:25.605 "unmap": true, 00:26:25.605 "flush": true, 00:26:25.605 "reset": true, 00:26:25.605 "nvme_admin": false, 00:26:25.605 "nvme_io": false, 00:26:25.605 "nvme_io_md": false, 00:26:25.605 "write_zeroes": true, 00:26:25.605 "zcopy": true, 00:26:25.605 "get_zone_info": false, 00:26:25.605 "zone_management": false, 00:26:25.605 "zone_append": false, 00:26:25.605 "compare": false, 00:26:25.605 "compare_and_write": false, 00:26:25.605 "abort": true, 00:26:25.605 "seek_hole": false, 00:26:25.605 "seek_data": false, 00:26:25.605 "copy": true, 00:26:25.605 "nvme_iov_md": false 00:26:25.605 }, 00:26:25.605 "memory_domains": [ 00:26:25.605 { 00:26:25.605 "dma_device_id": "system", 00:26:25.605 "dma_device_type": 1 00:26:25.605 }, 00:26:25.605 { 00:26:25.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:25.605 "dma_device_type": 2 00:26:25.605 } 00:26:25.605 ], 00:26:25.605 "driver_specific": { 00:26:25.605 "passthru": { 00:26:25.605 "name": "pt1", 00:26:25.605 "base_bdev_name": "malloc1" 00:26:25.605 } 00:26:25.605 } 00:26:25.605 }' 00:26:25.605 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:25.605 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:25.605 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:25.605 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:25.605 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:25.605 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:25.605 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:25.864 
16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:25.864 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:25.864 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:25.864 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:25.864 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:25.864 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:25.864 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:25.864 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:26.123 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:26.123 "name": "pt2", 00:26:26.123 "aliases": [ 00:26:26.123 "00000000-0000-0000-0000-000000000002" 00:26:26.123 ], 00:26:26.123 "product_name": "passthru", 00:26:26.123 "block_size": 512, 00:26:26.123 "num_blocks": 65536, 00:26:26.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:26.123 "assigned_rate_limits": { 00:26:26.123 "rw_ios_per_sec": 0, 00:26:26.123 "rw_mbytes_per_sec": 0, 00:26:26.123 "r_mbytes_per_sec": 0, 00:26:26.123 "w_mbytes_per_sec": 0 00:26:26.123 }, 00:26:26.123 "claimed": true, 00:26:26.123 "claim_type": "exclusive_write", 00:26:26.123 "zoned": false, 00:26:26.123 "supported_io_types": { 00:26:26.123 "read": true, 00:26:26.123 "write": true, 00:26:26.123 "unmap": true, 00:26:26.123 "flush": true, 00:26:26.123 "reset": true, 00:26:26.123 "nvme_admin": false, 00:26:26.123 "nvme_io": false, 00:26:26.123 "nvme_io_md": false, 00:26:26.123 "write_zeroes": true, 00:26:26.123 "zcopy": true, 00:26:26.123 "get_zone_info": false, 00:26:26.123 "zone_management": false, 
00:26:26.123 "zone_append": false, 00:26:26.123 "compare": false, 00:26:26.123 "compare_and_write": false, 00:26:26.123 "abort": true, 00:26:26.123 "seek_hole": false, 00:26:26.123 "seek_data": false, 00:26:26.123 "copy": true, 00:26:26.123 "nvme_iov_md": false 00:26:26.123 }, 00:26:26.123 "memory_domains": [ 00:26:26.123 { 00:26:26.123 "dma_device_id": "system", 00:26:26.123 "dma_device_type": 1 00:26:26.123 }, 00:26:26.123 { 00:26:26.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.123 "dma_device_type": 2 00:26:26.123 } 00:26:26.123 ], 00:26:26.123 "driver_specific": { 00:26:26.123 "passthru": { 00:26:26.123 "name": "pt2", 00:26:26.123 "base_bdev_name": "malloc2" 00:26:26.123 } 00:26:26.123 } 00:26:26.123 }' 00:26:26.123 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:26.123 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:26.123 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:26.123 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:26.123 16:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:26.382 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:26.382 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:26.382 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:26.382 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:26.382 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:26.382 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:26.382 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:26.382 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for 
name in $base_bdev_names 00:26:26.383 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:26.383 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:26.642 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:26.642 "name": "pt3", 00:26:26.642 "aliases": [ 00:26:26.642 "00000000-0000-0000-0000-000000000003" 00:26:26.642 ], 00:26:26.642 "product_name": "passthru", 00:26:26.642 "block_size": 512, 00:26:26.642 "num_blocks": 65536, 00:26:26.642 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:26.642 "assigned_rate_limits": { 00:26:26.642 "rw_ios_per_sec": 0, 00:26:26.642 "rw_mbytes_per_sec": 0, 00:26:26.642 "r_mbytes_per_sec": 0, 00:26:26.642 "w_mbytes_per_sec": 0 00:26:26.642 }, 00:26:26.642 "claimed": true, 00:26:26.642 "claim_type": "exclusive_write", 00:26:26.642 "zoned": false, 00:26:26.642 "supported_io_types": { 00:26:26.642 "read": true, 00:26:26.642 "write": true, 00:26:26.642 "unmap": true, 00:26:26.642 "flush": true, 00:26:26.642 "reset": true, 00:26:26.642 "nvme_admin": false, 00:26:26.642 "nvme_io": false, 00:26:26.642 "nvme_io_md": false, 00:26:26.642 "write_zeroes": true, 00:26:26.642 "zcopy": true, 00:26:26.642 "get_zone_info": false, 00:26:26.642 "zone_management": false, 00:26:26.642 "zone_append": false, 00:26:26.642 "compare": false, 00:26:26.642 "compare_and_write": false, 00:26:26.642 "abort": true, 00:26:26.642 "seek_hole": false, 00:26:26.642 "seek_data": false, 00:26:26.642 "copy": true, 00:26:26.642 "nvme_iov_md": false 00:26:26.642 }, 00:26:26.642 "memory_domains": [ 00:26:26.642 { 00:26:26.642 "dma_device_id": "system", 00:26:26.642 "dma_device_type": 1 00:26:26.642 }, 00:26:26.642 { 00:26:26.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:26.642 "dma_device_type": 2 00:26:26.642 } 00:26:26.642 ], 00:26:26.642 
"driver_specific": { 00:26:26.642 "passthru": { 00:26:26.642 "name": "pt3", 00:26:26.642 "base_bdev_name": "malloc3" 00:26:26.642 } 00:26:26.642 } 00:26:26.642 }' 00:26:26.642 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:26.642 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:26.642 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:26.642 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:26.901 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:26.901 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:26.901 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:26.901 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:26.901 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:26.901 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:26.901 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:26.901 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:26.901 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:26.901 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:26.901 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:27.161 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:27.161 "name": "pt4", 00:26:27.161 "aliases": [ 00:26:27.161 "00000000-0000-0000-0000-000000000004" 00:26:27.161 ], 00:26:27.161 "product_name": 
"passthru", 00:26:27.161 "block_size": 512, 00:26:27.161 "num_blocks": 65536, 00:26:27.161 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:27.161 "assigned_rate_limits": { 00:26:27.161 "rw_ios_per_sec": 0, 00:26:27.161 "rw_mbytes_per_sec": 0, 00:26:27.161 "r_mbytes_per_sec": 0, 00:26:27.161 "w_mbytes_per_sec": 0 00:26:27.161 }, 00:26:27.161 "claimed": true, 00:26:27.161 "claim_type": "exclusive_write", 00:26:27.161 "zoned": false, 00:26:27.161 "supported_io_types": { 00:26:27.161 "read": true, 00:26:27.161 "write": true, 00:26:27.161 "unmap": true, 00:26:27.161 "flush": true, 00:26:27.161 "reset": true, 00:26:27.161 "nvme_admin": false, 00:26:27.161 "nvme_io": false, 00:26:27.161 "nvme_io_md": false, 00:26:27.161 "write_zeroes": true, 00:26:27.161 "zcopy": true, 00:26:27.161 "get_zone_info": false, 00:26:27.161 "zone_management": false, 00:26:27.161 "zone_append": false, 00:26:27.161 "compare": false, 00:26:27.161 "compare_and_write": false, 00:26:27.161 "abort": true, 00:26:27.161 "seek_hole": false, 00:26:27.161 "seek_data": false, 00:26:27.161 "copy": true, 00:26:27.161 "nvme_iov_md": false 00:26:27.161 }, 00:26:27.161 "memory_domains": [ 00:26:27.161 { 00:26:27.161 "dma_device_id": "system", 00:26:27.161 "dma_device_type": 1 00:26:27.161 }, 00:26:27.161 { 00:26:27.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:27.161 "dma_device_type": 2 00:26:27.161 } 00:26:27.161 ], 00:26:27.161 "driver_specific": { 00:26:27.161 "passthru": { 00:26:27.161 "name": "pt4", 00:26:27.161 "base_bdev_name": "malloc4" 00:26:27.161 } 00:26:27.161 } 00:26:27.161 }' 00:26:27.161 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:27.161 16:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:27.161 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:27.161 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:27.420 16:43:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:27.420 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:27.420 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:27.420 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:27.420 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:27.420 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:27.420 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:27.420 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:27.420 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:27.420 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:26:27.680 [2024-07-24 16:43:24.388656] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:27.680 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 78543c8b-61d2-4549-abb4-4108c0ed550c '!=' 78543c8b-61d2-4549-abb4-4108c0ed550c ']' 00:26:27.680 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:26:27.680 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:27.680 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:27.680 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:27.940 [2024-07-24 16:43:24.616905] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:27.940 16:43:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:27.940 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:27.940 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:27.940 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:27.940 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:27.940 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:27.940 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:27.940 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:27.940 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:27.940 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:27.940 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.940 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:28.199 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:28.199 "name": "raid_bdev1", 00:26:28.199 "uuid": "78543c8b-61d2-4549-abb4-4108c0ed550c", 00:26:28.199 "strip_size_kb": 0, 00:26:28.199 "state": "online", 00:26:28.199 "raid_level": "raid1", 00:26:28.199 "superblock": true, 00:26:28.199 "num_base_bdevs": 4, 00:26:28.199 "num_base_bdevs_discovered": 3, 00:26:28.199 "num_base_bdevs_operational": 3, 00:26:28.199 "base_bdevs_list": [ 00:26:28.199 { 00:26:28.199 "name": null, 00:26:28.199 "uuid": "00000000-0000-0000-0000-000000000000", 
00:26:28.199 "is_configured": false, 00:26:28.199 "data_offset": 2048, 00:26:28.199 "data_size": 63488 00:26:28.199 }, 00:26:28.199 { 00:26:28.199 "name": "pt2", 00:26:28.199 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:28.199 "is_configured": true, 00:26:28.199 "data_offset": 2048, 00:26:28.199 "data_size": 63488 00:26:28.199 }, 00:26:28.199 { 00:26:28.199 "name": "pt3", 00:26:28.199 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:28.199 "is_configured": true, 00:26:28.199 "data_offset": 2048, 00:26:28.199 "data_size": 63488 00:26:28.199 }, 00:26:28.199 { 00:26:28.199 "name": "pt4", 00:26:28.199 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:28.199 "is_configured": true, 00:26:28.199 "data_offset": 2048, 00:26:28.199 "data_size": 63488 00:26:28.199 } 00:26:28.199 ] 00:26:28.199 }' 00:26:28.199 16:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:28.199 16:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.767 16:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:29.026 [2024-07-24 16:43:25.651666] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:29.026 [2024-07-24 16:43:25.651701] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:29.026 [2024-07-24 16:43:25.651780] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:29.026 [2024-07-24 16:43:25.651870] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:29.026 [2024-07-24 16:43:25.651887] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:26:29.027 16:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.027 16:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:26:29.286 16:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:26:29.286 16:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:26:29.286 16:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:29.286 16:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:29.286 16:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:29.286 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:29.286 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:29.286 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:29.544 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:29.544 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:29.544 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:29.803 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:29.803 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:29.803 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:26:29.803 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs 
- 1 )) 00:26:29.803 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:30.063 [2024-07-24 16:43:26.790673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:30.063 [2024-07-24 16:43:26.790734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.063 [2024-07-24 16:43:26.790765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044180 00:26:30.063 [2024-07-24 16:43:26.790781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.063 [2024-07-24 16:43:26.793561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.063 [2024-07-24 16:43:26.793595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:30.063 [2024-07-24 16:43:26.793695] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:30.063 [2024-07-24 16:43:26.793749] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:30.063 pt2 00:26:30.063 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:30.063 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:30.063 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:30.063 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:30.063 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:30.063 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:30.063 16:43:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:30.063 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:30.063 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:30.063 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:30.063 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.063 16:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.322 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:30.322 "name": "raid_bdev1", 00:26:30.322 "uuid": "78543c8b-61d2-4549-abb4-4108c0ed550c", 00:26:30.322 "strip_size_kb": 0, 00:26:30.322 "state": "configuring", 00:26:30.322 "raid_level": "raid1", 00:26:30.322 "superblock": true, 00:26:30.322 "num_base_bdevs": 4, 00:26:30.322 "num_base_bdevs_discovered": 1, 00:26:30.322 "num_base_bdevs_operational": 3, 00:26:30.322 "base_bdevs_list": [ 00:26:30.322 { 00:26:30.322 "name": null, 00:26:30.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:30.322 "is_configured": false, 00:26:30.322 "data_offset": 2048, 00:26:30.322 "data_size": 63488 00:26:30.322 }, 00:26:30.322 { 00:26:30.322 "name": "pt2", 00:26:30.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:30.322 "is_configured": true, 00:26:30.322 "data_offset": 2048, 00:26:30.322 "data_size": 63488 00:26:30.322 }, 00:26:30.322 { 00:26:30.322 "name": null, 00:26:30.322 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:30.322 "is_configured": false, 00:26:30.322 "data_offset": 2048, 00:26:30.322 "data_size": 63488 00:26:30.322 }, 00:26:30.322 { 00:26:30.322 "name": null, 00:26:30.322 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:30.322 "is_configured": false, 00:26:30.322 
"data_offset": 2048, 00:26:30.322 "data_size": 63488 00:26:30.322 } 00:26:30.322 ] 00:26:30.322 }' 00:26:30.322 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:30.322 16:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.943 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:26:30.943 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:26:30.943 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:31.215 [2024-07-24 16:43:27.813434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:31.215 [2024-07-24 16:43:27.813498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:31.215 [2024-07-24 16:43:27.813526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044a80 00:26:31.215 [2024-07-24 16:43:27.813542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:31.215 [2024-07-24 16:43:27.814110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:31.215 [2024-07-24 16:43:27.814134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:31.215 [2024-07-24 16:43:27.814239] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:31.215 [2024-07-24 16:43:27.814265] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:31.215 pt3 00:26:31.215 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:31.215 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:26:31.215 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:31.215 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:31.215 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:31.215 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:31.215 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:31.215 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:31.215 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:31.215 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:31.215 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.215 16:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.215 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:31.215 "name": "raid_bdev1", 00:26:31.215 "uuid": "78543c8b-61d2-4549-abb4-4108c0ed550c", 00:26:31.215 "strip_size_kb": 0, 00:26:31.215 "state": "configuring", 00:26:31.215 "raid_level": "raid1", 00:26:31.215 "superblock": true, 00:26:31.215 "num_base_bdevs": 4, 00:26:31.215 "num_base_bdevs_discovered": 2, 00:26:31.215 "num_base_bdevs_operational": 3, 00:26:31.215 "base_bdevs_list": [ 00:26:31.215 { 00:26:31.215 "name": null, 00:26:31.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.215 "is_configured": false, 00:26:31.215 "data_offset": 2048, 00:26:31.215 "data_size": 63488 00:26:31.215 }, 00:26:31.215 { 00:26:31.215 "name": "pt2", 00:26:31.215 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:26:31.215 "is_configured": true, 00:26:31.215 "data_offset": 2048, 00:26:31.215 "data_size": 63488 00:26:31.215 }, 00:26:31.215 { 00:26:31.215 "name": "pt3", 00:26:31.215 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:31.215 "is_configured": true, 00:26:31.215 "data_offset": 2048, 00:26:31.215 "data_size": 63488 00:26:31.215 }, 00:26:31.215 { 00:26:31.215 "name": null, 00:26:31.215 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:31.215 "is_configured": false, 00:26:31.215 "data_offset": 2048, 00:26:31.215 "data_size": 63488 00:26:31.215 } 00:26:31.215 ] 00:26:31.215 }' 00:26:31.215 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:31.215 16:43:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.151 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:26:32.151 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:26:32.151 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:26:32.151 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:32.151 [2024-07-24 16:43:28.856294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:32.152 [2024-07-24 16:43:28.856358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:32.152 [2024-07-24 16:43:28.856385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044d80 00:26:32.152 [2024-07-24 16:43:28.856401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:32.152 [2024-07-24 16:43:28.856969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:32.152 [2024-07-24 
16:43:28.856994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:32.152 [2024-07-24 16:43:28.857091] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:32.152 [2024-07-24 16:43:28.857117] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:32.152 [2024-07-24 16:43:28.857327] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000044780 00:26:32.152 [2024-07-24 16:43:28.857342] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:32.152 [2024-07-24 16:43:28.857668] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:26:32.152 [2024-07-24 16:43:28.857896] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000044780 00:26:32.152 [2024-07-24 16:43:28.857913] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000044780 00:26:32.152 [2024-07-24 16:43:28.858086] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:32.152 pt4 00:26:32.152 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:32.152 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:32.152 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:32.152 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:32.152 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:32.152 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:32.152 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:32.152 16:43:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:32.152 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:32.152 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:32.152 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.152 16:43:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.411 16:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:32.411 "name": "raid_bdev1", 00:26:32.411 "uuid": "78543c8b-61d2-4549-abb4-4108c0ed550c", 00:26:32.411 "strip_size_kb": 0, 00:26:32.411 "state": "online", 00:26:32.411 "raid_level": "raid1", 00:26:32.411 "superblock": true, 00:26:32.411 "num_base_bdevs": 4, 00:26:32.411 "num_base_bdevs_discovered": 3, 00:26:32.411 "num_base_bdevs_operational": 3, 00:26:32.411 "base_bdevs_list": [ 00:26:32.411 { 00:26:32.411 "name": null, 00:26:32.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.411 "is_configured": false, 00:26:32.411 "data_offset": 2048, 00:26:32.411 "data_size": 63488 00:26:32.411 }, 00:26:32.411 { 00:26:32.411 "name": "pt2", 00:26:32.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:32.411 "is_configured": true, 00:26:32.411 "data_offset": 2048, 00:26:32.411 "data_size": 63488 00:26:32.411 }, 00:26:32.411 { 00:26:32.411 "name": "pt3", 00:26:32.411 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:32.411 "is_configured": true, 00:26:32.411 "data_offset": 2048, 00:26:32.411 "data_size": 63488 00:26:32.411 }, 00:26:32.411 { 00:26:32.411 "name": "pt4", 00:26:32.411 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:32.411 "is_configured": true, 00:26:32.411 "data_offset": 2048, 00:26:32.411 "data_size": 63488 00:26:32.411 } 00:26:32.411 ] 00:26:32.411 }' 
00:26:32.411 16:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:32.411 16:43:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.979 16:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:33.238 [2024-07-24 16:43:29.871056] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:33.238 [2024-07-24 16:43:29.871088] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:33.238 [2024-07-24 16:43:29.871181] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:33.238 [2024-07-24 16:43:29.871267] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:33.238 [2024-07-24 16:43:29.871290] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000044780 name raid_bdev1, state offline 00:26:33.238 16:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.238 16:43:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:26:33.496 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:26:33.496 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:26:33.496 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:26:33.496 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:26:33.496 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:33.496 16:43:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:33.755 [2024-07-24 16:43:30.560872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:33.755 [2024-07-24 16:43:30.560941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:33.755 [2024-07-24 16:43:30.560966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045080 00:26:33.755 [2024-07-24 16:43:30.560984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:33.755 [2024-07-24 16:43:30.563756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:33.755 [2024-07-24 16:43:30.563795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:33.755 [2024-07-24 16:43:30.563884] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:33.755 [2024-07-24 16:43:30.563948] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:33.755 [2024-07-24 16:43:30.564111] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:33.755 [2024-07-24 16:43:30.564136] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:33.755 [2024-07-24 16:43:30.564164] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000045680 name raid_bdev1, state configuring 00:26:33.755 [2024-07-24 16:43:30.564232] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:33.755 [2024-07-24 16:43:30.564351] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:33.755 pt1 00:26:33.755 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:26:33.755 16:43:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:33.755 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:33.755 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:33.755 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:33.755 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:33.755 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:33.755 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:33.755 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:33.755 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:33.755 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:33.755 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.755 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.014 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:34.014 "name": "raid_bdev1", 00:26:34.014 "uuid": "78543c8b-61d2-4549-abb4-4108c0ed550c", 00:26:34.014 "strip_size_kb": 0, 00:26:34.014 "state": "configuring", 00:26:34.014 "raid_level": "raid1", 00:26:34.014 "superblock": true, 00:26:34.015 "num_base_bdevs": 4, 00:26:34.015 "num_base_bdevs_discovered": 2, 00:26:34.015 "num_base_bdevs_operational": 3, 00:26:34.015 "base_bdevs_list": [ 00:26:34.015 { 00:26:34.015 "name": null, 00:26:34.015 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:34.015 "is_configured": false, 00:26:34.015 "data_offset": 2048, 00:26:34.015 "data_size": 63488 00:26:34.015 }, 00:26:34.015 { 00:26:34.015 "name": "pt2", 00:26:34.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:34.015 "is_configured": true, 00:26:34.015 "data_offset": 2048, 00:26:34.015 "data_size": 63488 00:26:34.015 }, 00:26:34.015 { 00:26:34.015 "name": "pt3", 00:26:34.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:34.015 "is_configured": true, 00:26:34.015 "data_offset": 2048, 00:26:34.015 "data_size": 63488 00:26:34.015 }, 00:26:34.015 { 00:26:34.015 "name": null, 00:26:34.015 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:34.015 "is_configured": false, 00:26:34.015 "data_offset": 2048, 00:26:34.015 "data_size": 63488 00:26:34.015 } 00:26:34.015 ] 00:26:34.015 }' 00:26:34.015 16:43:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:34.015 16:43:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.583 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:26:34.583 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:34.842 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:26:34.842 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:35.117 [2024-07-24 16:43:31.832310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:35.117 [2024-07-24 16:43:31.832374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:35.117 
[2024-07-24 16:43:31.832402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045c80 00:26:35.117 [2024-07-24 16:43:31.832418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:35.117 [2024-07-24 16:43:31.833004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:35.117 [2024-07-24 16:43:31.833029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:35.118 [2024-07-24 16:43:31.833121] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:35.118 [2024-07-24 16:43:31.833159] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:35.118 [2024-07-24 16:43:31.833343] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000045980 00:26:35.118 [2024-07-24 16:43:31.833357] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:35.118 [2024-07-24 16:43:31.833656] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:26:35.118 [2024-07-24 16:43:31.833888] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000045980 00:26:35.118 [2024-07-24 16:43:31.833905] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000045980 00:26:35.118 [2024-07-24 16:43:31.834081] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:35.118 pt4 00:26:35.118 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:35.118 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:35.118 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:35.118 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:35.118 
16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:35.118 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:35.118 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:35.118 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:35.118 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:35.118 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:35.118 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.118 16:43:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.376 16:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:35.376 "name": "raid_bdev1", 00:26:35.376 "uuid": "78543c8b-61d2-4549-abb4-4108c0ed550c", 00:26:35.376 "strip_size_kb": 0, 00:26:35.376 "state": "online", 00:26:35.376 "raid_level": "raid1", 00:26:35.376 "superblock": true, 00:26:35.376 "num_base_bdevs": 4, 00:26:35.376 "num_base_bdevs_discovered": 3, 00:26:35.376 "num_base_bdevs_operational": 3, 00:26:35.376 "base_bdevs_list": [ 00:26:35.376 { 00:26:35.376 "name": null, 00:26:35.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.376 "is_configured": false, 00:26:35.376 "data_offset": 2048, 00:26:35.376 "data_size": 63488 00:26:35.376 }, 00:26:35.376 { 00:26:35.376 "name": "pt2", 00:26:35.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:35.376 "is_configured": true, 00:26:35.376 "data_offset": 2048, 00:26:35.376 "data_size": 63488 00:26:35.376 }, 00:26:35.376 { 00:26:35.376 "name": "pt3", 00:26:35.376 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:35.376 
"is_configured": true, 00:26:35.376 "data_offset": 2048, 00:26:35.376 "data_size": 63488 00:26:35.376 }, 00:26:35.376 { 00:26:35.376 "name": "pt4", 00:26:35.376 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:35.376 "is_configured": true, 00:26:35.376 "data_offset": 2048, 00:26:35.376 "data_size": 63488 00:26:35.376 } 00:26:35.376 ] 00:26:35.376 }' 00:26:35.376 16:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:35.376 16:43:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.942 16:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:35.942 16:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:36.201 16:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:26:36.202 16:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:36.202 16:43:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:26:36.461 [2024-07-24 16:43:33.100130] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:36.461 16:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 78543c8b-61d2-4549-abb4-4108c0ed550c '!=' 78543c8b-61d2-4549-abb4-4108c0ed550c ']' 00:26:36.461 16:43:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 1731532 00:26:36.461 16:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 1731532 ']' 00:26:36.461 16:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 1731532 00:26:36.461 16:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # 
uname 00:26:36.461 16:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:36.461 16:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1731532 00:26:36.461 16:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:36.461 16:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:36.461 16:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1731532' 00:26:36.461 killing process with pid 1731532 00:26:36.461 16:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 1731532 00:26:36.461 [2024-07-24 16:43:33.178915] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:36.461 [2024-07-24 16:43:33.179016] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:36.461 16:43:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 1731532 00:26:36.461 [2024-07-24 16:43:33.179105] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:36.461 [2024-07-24 16:43:33.179130] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000045980 name raid_bdev1, state offline 00:26:37.035 [2024-07-24 16:43:33.636153] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:38.943 16:43:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:26:38.943 00:26:38.943 real 0m25.861s 00:26:38.943 user 0m45.100s 00:26:38.943 sys 0m4.547s 00:26:38.943 16:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.943 16:43:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.943 ************************************ 00:26:38.943 END TEST raid_superblock_test 00:26:38.943 
************************************ 00:26:38.943 16:43:35 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:26:38.943 16:43:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:38.943 16:43:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:38.943 16:43:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:38.943 ************************************ 00:26:38.943 START TEST raid_read_error_test 00:26:38.943 ************************************ 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 
00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.MoWSfOAdrs 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1736287 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1736287 /var/tmp/spdk-raid.sock 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 1736287 ']' 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:38.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:38.943 16:43:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.943 [2024-07-24 16:43:35.572310] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:26:38.943 [2024-07-24 16:43:35.572432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736287 ] 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:02.3 cannot be used 
00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:38.943 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:38.943 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:38.943 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:38.943 [2024-07-24 16:43:35.797642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.512 [2024-07-24 16:43:36.087315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.771 [2024-07-24 16:43:36.411916] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:39.771 [2024-07-24 16:43:36.411958] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:39.771 16:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:39.771 16:43:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:26:39.771 16:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:39.771 16:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:40.030 BaseBdev1_malloc 00:26:40.030 16:43:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:40.289 true 00:26:40.289 16:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:40.548 [2024-07-24 16:43:37.289339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:40.549 [2024-07-24 16:43:37.289401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:40.549 [2024-07-24 16:43:37.289427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:26:40.549 [2024-07-24 16:43:37.289449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:40.549 [2024-07-24 16:43:37.292243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:40.549 [2024-07-24 16:43:37.292283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:40.549 BaseBdev1 00:26:40.549 16:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:40.549 16:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:40.807 BaseBdev2_malloc 00:26:40.807 16:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:41.066 true 00:26:41.066 16:43:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:41.325 [2024-07-24 16:43:38.008797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:26:41.325 [2024-07-24 16:43:38.008857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:41.325 [2024-07-24 16:43:38.008883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:26:41.325 [2024-07-24 16:43:38.008904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:41.325 [2024-07-24 16:43:38.011631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:41.325 [2024-07-24 16:43:38.011669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:41.325 BaseBdev2 00:26:41.325 16:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:41.325 16:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:41.584 BaseBdev3_malloc 00:26:41.584 16:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:41.843 true 00:26:41.843 16:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:42.103 [2024-07-24 16:43:38.739267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:42.103 [2024-07-24 16:43:38.739325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.103 [2024-07-24 16:43:38.739352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:26:42.103 [2024-07-24 16:43:38.739370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.103 [2024-07-24 
16:43:38.742161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:42.103 [2024-07-24 16:43:38.742201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:42.103 BaseBdev3 00:26:42.103 16:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:42.103 16:43:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:42.362 BaseBdev4_malloc 00:26:42.362 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:42.621 true 00:26:42.621 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:42.621 [2024-07-24 16:43:39.467978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:42.621 [2024-07-24 16:43:39.468044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.621 [2024-07-24 16:43:39.468074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:26:42.621 [2024-07-24 16:43:39.468092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.621 [2024-07-24 16:43:39.470897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:42.621 [2024-07-24 16:43:39.470936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:42.621 BaseBdev4 00:26:42.881 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:42.881 [2024-07-24 16:43:39.692631] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:42.881 [2024-07-24 16:43:39.694984] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:42.881 [2024-07-24 16:43:39.695086] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:42.881 [2024-07-24 16:43:39.695177] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:42.881 [2024-07-24 16:43:39.695478] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:26:42.881 [2024-07-24 16:43:39.695499] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:42.881 [2024-07-24 16:43:39.695836] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:26:42.881 [2024-07-24 16:43:39.696111] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:26:42.881 [2024-07-24 16:43:39.696125] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:26:42.881 [2024-07-24 16:43:39.696348] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:42.881 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:42.881 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:42.881 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:42.881 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:42.881 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:42.881 16:43:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:42.881 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:42.881 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:42.881 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:42.881 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:42.881 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:42.881 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.140 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:43.140 "name": "raid_bdev1", 00:26:43.140 "uuid": "a1fe83f8-0006-4cfe-b817-5b4fae82ad63", 00:26:43.140 "strip_size_kb": 0, 00:26:43.140 "state": "online", 00:26:43.140 "raid_level": "raid1", 00:26:43.140 "superblock": true, 00:26:43.140 "num_base_bdevs": 4, 00:26:43.140 "num_base_bdevs_discovered": 4, 00:26:43.140 "num_base_bdevs_operational": 4, 00:26:43.140 "base_bdevs_list": [ 00:26:43.140 { 00:26:43.140 "name": "BaseBdev1", 00:26:43.140 "uuid": "56eb363a-37f6-59b7-a452-315b7007724a", 00:26:43.140 "is_configured": true, 00:26:43.140 "data_offset": 2048, 00:26:43.140 "data_size": 63488 00:26:43.140 }, 00:26:43.140 { 00:26:43.141 "name": "BaseBdev2", 00:26:43.141 "uuid": "114ed70b-10be-5474-add9-1e8661d8456a", 00:26:43.141 "is_configured": true, 00:26:43.141 "data_offset": 2048, 00:26:43.141 "data_size": 63488 00:26:43.141 }, 00:26:43.141 { 00:26:43.141 "name": "BaseBdev3", 00:26:43.141 "uuid": "8f269044-d772-5839-b078-7e5903a344f9", 00:26:43.141 "is_configured": true, 00:26:43.141 "data_offset": 2048, 00:26:43.141 "data_size": 63488 
00:26:43.141 }, 00:26:43.141 { 00:26:43.141 "name": "BaseBdev4", 00:26:43.141 "uuid": "8b568bcd-c045-5687-8986-88bcb444a3de", 00:26:43.141 "is_configured": true, 00:26:43.141 "data_offset": 2048, 00:26:43.141 "data_size": 63488 00:26:43.141 } 00:26:43.141 ] 00:26:43.141 }' 00:26:43.141 16:43:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:43.141 16:43:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:43.709 16:43:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:26:43.709 16:43:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:43.969 [2024-07-24 16:43:40.620978] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid1 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.975 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.235 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:45.235 "name": "raid_bdev1", 00:26:45.235 "uuid": "a1fe83f8-0006-4cfe-b817-5b4fae82ad63", 00:26:45.235 "strip_size_kb": 0, 00:26:45.235 "state": "online", 00:26:45.235 "raid_level": "raid1", 00:26:45.235 "superblock": true, 00:26:45.235 "num_base_bdevs": 4, 00:26:45.235 "num_base_bdevs_discovered": 4, 00:26:45.235 "num_base_bdevs_operational": 4, 00:26:45.235 "base_bdevs_list": [ 00:26:45.235 { 00:26:45.235 "name": "BaseBdev1", 00:26:45.235 "uuid": "56eb363a-37f6-59b7-a452-315b7007724a", 00:26:45.235 "is_configured": true, 00:26:45.235 "data_offset": 2048, 00:26:45.235 "data_size": 63488 00:26:45.235 }, 00:26:45.235 { 00:26:45.235 "name": "BaseBdev2", 00:26:45.235 "uuid": "114ed70b-10be-5474-add9-1e8661d8456a", 00:26:45.235 "is_configured": true, 00:26:45.235 "data_offset": 2048, 00:26:45.235 "data_size": 63488 00:26:45.235 }, 00:26:45.235 { 00:26:45.235 "name": "BaseBdev3", 00:26:45.235 
"uuid": "8f269044-d772-5839-b078-7e5903a344f9", 00:26:45.235 "is_configured": true, 00:26:45.235 "data_offset": 2048, 00:26:45.235 "data_size": 63488 00:26:45.235 }, 00:26:45.235 { 00:26:45.235 "name": "BaseBdev4", 00:26:45.235 "uuid": "8b568bcd-c045-5687-8986-88bcb444a3de", 00:26:45.235 "is_configured": true, 00:26:45.235 "data_offset": 2048, 00:26:45.235 "data_size": 63488 00:26:45.235 } 00:26:45.235 ] 00:26:45.235 }' 00:26:45.235 16:43:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:45.235 16:43:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.804 16:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:46.064 [2024-07-24 16:43:42.764363] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:46.064 [2024-07-24 16:43:42.764407] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:46.064 [2024-07-24 16:43:42.767816] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:46.064 [2024-07-24 16:43:42.767877] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:46.064 [2024-07-24 16:43:42.768024] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:46.064 [2024-07-24 16:43:42.768050] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:26:46.064 0 00:26:46.064 16:43:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1736287 00:26:46.064 16:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 1736287 ']' 00:26:46.064 16:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 1736287 00:26:46.064 16:43:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:26:46.064 16:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:46.064 16:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1736287 00:26:46.064 16:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:46.064 16:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:46.064 16:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1736287' 00:26:46.064 killing process with pid 1736287 00:26:46.064 16:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 1736287 00:26:46.064 [2024-07-24 16:43:42.841896] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:46.064 16:43:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 1736287 00:26:46.324 [2024-07-24 16:43:43.181038] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:48.230 16:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.MoWSfOAdrs 00:26:48.230 16:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:26:48.230 16:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:26:48.230 16:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:26:48.230 16:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:26:48.230 16:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:48.230 16:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:48.230 16:43:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:48.230 00:26:48.230 real 0m9.537s 00:26:48.230 user 0m13.673s 00:26:48.230 
sys 0m1.454s 00:26:48.230 16:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:48.230 16:43:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.230 ************************************ 00:26:48.230 END TEST raid_read_error_test 00:26:48.230 ************************************ 00:26:48.230 16:43:45 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:26:48.230 16:43:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:48.230 16:43:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:48.230 16:43:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:48.230 ************************************ 00:26:48.230 START TEST raid_write_error_test 00:26:48.230 ************************************ 00:26:48.230 16:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:26:48.230 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:26:48.230 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:26:48.230 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:26:48.230 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:26:48.230 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:48.230 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:26:48.230 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:26:48.230 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:48.230 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:26:48.230 16:43:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # (( i++ )) 00:26:48.230 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:48.230 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:26:48.230 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:26:48.231 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:48.231 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:26:48.231 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:26:48.231 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:48.231 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:48.231 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:26:48.231 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:26:48.231 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:26:48.231 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:26:48.231 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:26:48.231 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:26:48.231 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:26:48.231 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:26:48.231 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:26:48.490 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.7I5O5DyCea 00:26:48.490 16:43:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=1737979 00:26:48.490 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 1737979 /var/tmp/spdk-raid.sock 00:26:48.490 16:43:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:48.490 16:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 1737979 ']' 00:26:48.490 16:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:48.490 16:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:48.490 16:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:48.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:48.490 16:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:48.490 16:43:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.490 [2024-07-24 16:43:45.202248] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:26:48.490 [2024-07-24 16:43:45.202373] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737979 ] 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:02.3 cannot be used 
00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:48.490 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:48.490 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:48.490 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:48.749 [2024-07-24 16:43:45.432491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.007 [2024-07-24 16:43:45.698458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.266 [2024-07-24 16:43:46.043069] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:49.266 [2024-07-24 16:43:46.043108] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:49.524 16:43:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:49.524 16:43:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:26:49.524 16:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:49.524 16:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:49.782 BaseBdev1_malloc 00:26:49.782 16:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:50.041 true 00:26:50.041 16:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:50.299 [2024-07-24 16:43:46.936510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:50.299 [2024-07-24 16:43:46.936574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:50.299 [2024-07-24 16:43:46.936599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:26:50.300 [2024-07-24 16:43:46.936621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:50.300 [2024-07-24 16:43:46.939286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:50.300 [2024-07-24 16:43:46.939326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:50.300 BaseBdev1 00:26:50.300 16:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:50.300 16:43:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:50.559 BaseBdev2_malloc 00:26:50.559 16:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:50.819 true 00:26:50.819 16:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:50.819 [2024-07-24 16:43:47.671305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:26:50.819 [2024-07-24 16:43:47.671368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:50.819 [2024-07-24 16:43:47.671393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:26:50.819 [2024-07-24 16:43:47.671414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:50.819 [2024-07-24 16:43:47.674146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:50.819 [2024-07-24 16:43:47.674184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:50.819 BaseBdev2 00:26:51.078 16:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:51.078 16:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:51.337 BaseBdev3_malloc 00:26:51.337 16:43:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:51.337 true 00:26:51.337 16:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:51.596 [2024-07-24 16:43:48.401821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:51.596 [2024-07-24 16:43:48.401877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:51.596 [2024-07-24 16:43:48.401901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:26:51.596 [2024-07-24 16:43:48.401919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:51.596 
[2024-07-24 16:43:48.404645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:51.596 [2024-07-24 16:43:48.404682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:51.596 BaseBdev3 00:26:51.596 16:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:51.597 16:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:51.855 BaseBdev4_malloc 00:26:51.855 16:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:52.114 true 00:26:52.114 16:43:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:52.373 [2024-07-24 16:43:49.136131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:52.373 [2024-07-24 16:43:49.136203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:52.373 [2024-07-24 16:43:49.136229] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:26:52.373 [2024-07-24 16:43:49.136247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:52.373 [2024-07-24 16:43:49.138962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:52.373 [2024-07-24 16:43:49.138998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:52.373 BaseBdev4 00:26:52.373 16:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:52.632 [2024-07-24 16:43:49.364776] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:52.632 [2024-07-24 16:43:49.367076] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:52.632 [2024-07-24 16:43:49.367184] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:52.632 [2024-07-24 16:43:49.367264] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:52.632 [2024-07-24 16:43:49.367549] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:26:52.632 [2024-07-24 16:43:49.367571] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:52.632 [2024-07-24 16:43:49.367908] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:26:52.632 [2024-07-24 16:43:49.368188] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:26:52.632 [2024-07-24 16:43:49.368205] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:26:52.632 [2024-07-24 16:43:49.368393] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:52.632 16:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:52.632 16:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:52.632 16:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:52.632 16:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:52.632 16:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:52.632 16:43:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:52.632 16:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:52.632 16:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:52.632 16:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:52.632 16:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:52.632 16:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.632 16:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.891 16:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:52.891 "name": "raid_bdev1", 00:26:52.891 "uuid": "40ca0aa5-5a90-4735-b290-5e89ee1bdd7a", 00:26:52.891 "strip_size_kb": 0, 00:26:52.891 "state": "online", 00:26:52.891 "raid_level": "raid1", 00:26:52.891 "superblock": true, 00:26:52.891 "num_base_bdevs": 4, 00:26:52.891 "num_base_bdevs_discovered": 4, 00:26:52.891 "num_base_bdevs_operational": 4, 00:26:52.891 "base_bdevs_list": [ 00:26:52.891 { 00:26:52.891 "name": "BaseBdev1", 00:26:52.891 "uuid": "d1b79e31-bf04-5252-bdbd-1b73f35417f3", 00:26:52.891 "is_configured": true, 00:26:52.891 "data_offset": 2048, 00:26:52.891 "data_size": 63488 00:26:52.891 }, 00:26:52.891 { 00:26:52.891 "name": "BaseBdev2", 00:26:52.891 "uuid": "24bf53c6-3b8a-58e4-9b04-062f85af3ce5", 00:26:52.891 "is_configured": true, 00:26:52.891 "data_offset": 2048, 00:26:52.891 "data_size": 63488 00:26:52.891 }, 00:26:52.891 { 00:26:52.891 "name": "BaseBdev3", 00:26:52.891 "uuid": "a0afc9b8-1d6a-5df1-9fd7-2721c9e637ba", 00:26:52.891 "is_configured": true, 00:26:52.891 "data_offset": 2048, 00:26:52.891 "data_size": 
63488 00:26:52.891 }, 00:26:52.891 { 00:26:52.891 "name": "BaseBdev4", 00:26:52.891 "uuid": "c47afa7d-3791-52d7-ae64-bb65e1f69f8a", 00:26:52.891 "is_configured": true, 00:26:52.891 "data_offset": 2048, 00:26:52.891 "data_size": 63488 00:26:52.891 } 00:26:52.891 ] 00:26:52.891 }' 00:26:52.891 16:43:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:52.891 16:43:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.459 16:43:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:26:53.459 16:43:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:53.459 [2024-07-24 16:43:50.289431] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:26:54.424 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:54.683 [2024-07-24 16:43:51.405025] bdev_raid.c:2247:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:26:54.683 [2024-07-24 16:43:51.405092] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:54.683 [2024-07-24 16:43:51.405357] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=3 00:26:54.683 
16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.683 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.943 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:54.943 "name": "raid_bdev1", 00:26:54.943 "uuid": "40ca0aa5-5a90-4735-b290-5e89ee1bdd7a", 00:26:54.943 "strip_size_kb": 0, 00:26:54.943 "state": "online", 00:26:54.943 "raid_level": "raid1", 00:26:54.943 "superblock": true, 00:26:54.943 "num_base_bdevs": 4, 00:26:54.943 "num_base_bdevs_discovered": 3, 00:26:54.943 "num_base_bdevs_operational": 3, 00:26:54.943 "base_bdevs_list": [ 00:26:54.943 { 00:26:54.943 "name": null, 00:26:54.943 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:54.943 "is_configured": false, 00:26:54.943 "data_offset": 2048, 00:26:54.943 "data_size": 63488 00:26:54.943 }, 00:26:54.943 { 00:26:54.943 "name": "BaseBdev2", 00:26:54.943 "uuid": "24bf53c6-3b8a-58e4-9b04-062f85af3ce5", 00:26:54.943 "is_configured": true, 00:26:54.943 "data_offset": 2048, 00:26:54.943 "data_size": 63488 00:26:54.943 }, 00:26:54.943 { 00:26:54.943 "name": "BaseBdev3", 00:26:54.943 "uuid": "a0afc9b8-1d6a-5df1-9fd7-2721c9e637ba", 00:26:54.943 "is_configured": true, 00:26:54.943 "data_offset": 2048, 00:26:54.943 "data_size": 63488 00:26:54.943 }, 00:26:54.943 { 00:26:54.943 "name": "BaseBdev4", 00:26:54.943 "uuid": "c47afa7d-3791-52d7-ae64-bb65e1f69f8a", 00:26:54.943 "is_configured": true, 00:26:54.943 "data_offset": 2048, 00:26:54.943 "data_size": 63488 00:26:54.943 } 00:26:54.943 ] 00:26:54.943 }' 00:26:54.943 16:43:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:54.943 16:43:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.511 16:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:56.079 [2024-07-24 16:43:52.727323] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:56.079 [2024-07-24 16:43:52.727370] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:56.079 [2024-07-24 16:43:52.730722] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:56.079 [2024-07-24 16:43:52.730779] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:56.079 [2024-07-24 16:43:52.730919] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:56.079 [2024-07-24 16:43:52.730936] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:26:56.079 0 00:26:56.079 16:43:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 1737979 00:26:56.079 16:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 1737979 ']' 00:26:56.079 16:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 1737979 00:26:56.079 16:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:26:56.079 16:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:56.079 16:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1737979 00:26:56.079 16:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:56.079 16:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:56.079 16:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1737979' 00:26:56.079 killing process with pid 1737979 00:26:56.079 16:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 1737979 00:26:56.079 16:43:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 1737979 00:26:56.079 [2024-07-24 16:43:52.816718] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:56.338 [2024-07-24 16:43:53.187217] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:58.241 16:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.7I5O5DyCea 00:26:58.241 16:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:26:58.242 16:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:26:58.242 16:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 
00:26:58.242 16:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:26:58.242 16:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:58.242 16:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:58.242 16:43:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:58.242 00:26:58.242 real 0m9.889s 00:26:58.242 user 0m14.287s 00:26:58.242 sys 0m1.528s 00:26:58.242 16:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:58.242 16:43:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.242 ************************************ 00:26:58.242 END TEST raid_write_error_test 00:26:58.242 ************************************ 00:26:58.242 16:43:55 bdev_raid -- bdev/bdev_raid.sh@955 -- # '[' true = true ']' 00:26:58.242 16:43:55 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:26:58.242 16:43:55 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:26:58.242 16:43:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:26:58.242 16:43:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:58.242 16:43:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:58.242 ************************************ 00:26:58.242 START TEST raid_rebuild_test 00:26:58.242 ************************************ 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:26:58.242 16:43:55 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:26:58.242 
16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=1739665 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 1739665 /var/tmp/spdk-raid.sock 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 1739665 ']' 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:58.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.242 16:43:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:58.500 [2024-07-24 16:43:55.152150] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:26:58.501 [2024-07-24 16:43:55.152271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1739665 ] 00:26:58.501 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:58.501 Zero copy mechanism will not be used. 
00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:58.501 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:58.501 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:58.501 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:58.501 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:58.775 [2024-07-24 16:43:55.378019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.050 [2024-07-24 16:43:55.656631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.308 [2024-07-24 16:43:55.981031] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:59.308 [2024-07-24 16:43:55.981067] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:59.566 16:43:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:59.566 16:43:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:26:59.566 16:43:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:59.566 16:43:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:59.824 BaseBdev1_malloc 00:26:59.824 16:43:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:00.390 [2024-07-24 16:43:57.048683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:00.390 [2024-07-24 16:43:57.048749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:27:00.390 [2024-07-24 16:43:57.048781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:27:00.390 [2024-07-24 16:43:57.048804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.390 [2024-07-24 16:43:57.051544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.390 [2024-07-24 16:43:57.051584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:00.390 BaseBdev1 00:27:00.390 16:43:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:00.390 16:43:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:00.649 BaseBdev2_malloc 00:27:00.649 16:43:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:01.215 [2024-07-24 16:43:57.828842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:01.215 [2024-07-24 16:43:57.828908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.215 [2024-07-24 16:43:57.828937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:27:01.215 [2024-07-24 16:43:57.828958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.215 [2024-07-24 16:43:57.831747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.215 [2024-07-24 16:43:57.831785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:01.215 BaseBdev2 00:27:01.215 16:43:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:01.473 spare_malloc 00:27:01.473 16:43:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:02.039 spare_delay 00:27:02.039 16:43:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:02.039 [2024-07-24 16:43:58.853889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:02.039 [2024-07-24 16:43:58.853952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:02.039 [2024-07-24 16:43:58.853980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:27:02.039 [2024-07-24 16:43:58.853999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:02.039 [2024-07-24 16:43:58.856804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:02.039 [2024-07-24 16:43:58.856843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:02.039 spare 00:27:02.039 16:43:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:02.606 [2024-07-24 16:43:59.355238] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:02.606 [2024-07-24 16:43:59.357595] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:02.606 [2024-07-24 16:43:59.357701] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000041a80 00:27:02.606 [2024-07-24 16:43:59.357721] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:02.606 [2024-07-24 16:43:59.358100] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:27:02.606 [2024-07-24 16:43:59.358358] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:27:02.606 [2024-07-24 16:43:59.358380] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:27:02.606 [2024-07-24 16:43:59.358611] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:02.606 16:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:02.606 16:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:02.606 16:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:02.606 16:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:02.606 16:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:02.606 16:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:02.606 16:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.606 16:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.606 16:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.606 16:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.606 16:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.606 16:43:59 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.865 16:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:02.865 "name": "raid_bdev1", 00:27:02.865 "uuid": "1e0dc720-946d-468a-bc5f-ff094cb950c2", 00:27:02.865 "strip_size_kb": 0, 00:27:02.865 "state": "online", 00:27:02.865 "raid_level": "raid1", 00:27:02.865 "superblock": false, 00:27:02.865 "num_base_bdevs": 2, 00:27:02.865 "num_base_bdevs_discovered": 2, 00:27:02.865 "num_base_bdevs_operational": 2, 00:27:02.865 "base_bdevs_list": [ 00:27:02.865 { 00:27:02.865 "name": "BaseBdev1", 00:27:02.865 "uuid": "7e3e2723-d434-54b2-8f58-5da43c843fa7", 00:27:02.865 "is_configured": true, 00:27:02.865 "data_offset": 0, 00:27:02.865 "data_size": 65536 00:27:02.865 }, 00:27:02.865 { 00:27:02.865 "name": "BaseBdev2", 00:27:02.865 "uuid": "5d69a631-68c1-585a-b127-f4327808a395", 00:27:02.865 "is_configured": true, 00:27:02.865 "data_offset": 0, 00:27:02.865 "data_size": 65536 00:27:02.865 } 00:27:02.865 ] 00:27:02.865 }' 00:27:02.865 16:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:02.865 16:43:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.432 16:44:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:03.432 16:44:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:27:03.689 [2024-07-24 16:44:00.398433] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:03.690 16:44:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:27:03.690 16:44:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.690 16:44:00 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:03.948 16:44:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:27:03.948 16:44:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:27:03.948 16:44:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:27:03.948 16:44:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:27:03.948 16:44:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:03.948 16:44:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:03.948 16:44:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:03.948 16:44:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:03.948 16:44:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:03.948 16:44:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:03.948 16:44:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:03.948 16:44:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:03.948 16:44:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:03.948 16:44:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:04.206 [2024-07-24 16:44:00.843331] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:27:04.206 /dev/nbd0 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:04.206 1+0 records in 00:27:04.206 1+0 records out 00:27:04.206 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259471 s, 15.8 MB/s 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # 
'[' raid1 = raid5f ']' 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:27:04.206 16:44:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:27:09.465 65536+0 records in 00:27:09.465 65536+0 records out 00:27:09.465 33554432 bytes (34 MB, 32 MiB) copied, 4.6935 s, 7.1 MB/s 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:09.465 [2024-07-24 16:44:05.842271] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:09.465 16:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:09.465 [2024-07-24 16:44:06.062971] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:09.465 "name": "raid_bdev1", 00:27:09.465 "uuid": 
"1e0dc720-946d-468a-bc5f-ff094cb950c2", 00:27:09.465 "strip_size_kb": 0, 00:27:09.465 "state": "online", 00:27:09.465 "raid_level": "raid1", 00:27:09.465 "superblock": false, 00:27:09.465 "num_base_bdevs": 2, 00:27:09.465 "num_base_bdevs_discovered": 1, 00:27:09.465 "num_base_bdevs_operational": 1, 00:27:09.465 "base_bdevs_list": [ 00:27:09.465 { 00:27:09.465 "name": null, 00:27:09.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:09.465 "is_configured": false, 00:27:09.465 "data_offset": 0, 00:27:09.465 "data_size": 65536 00:27:09.465 }, 00:27:09.465 { 00:27:09.465 "name": "BaseBdev2", 00:27:09.465 "uuid": "5d69a631-68c1-585a-b127-f4327808a395", 00:27:09.465 "is_configured": true, 00:27:09.465 "data_offset": 0, 00:27:09.465 "data_size": 65536 00:27:09.465 } 00:27:09.465 ] 00:27:09.465 }' 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:09.465 16:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.030 16:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:10.595 [2024-07-24 16:44:07.354641] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:10.595 [2024-07-24 16:44:07.382345] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d14400 00:27:10.595 [2024-07-24 16:44:07.384656] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:10.595 16:44:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:11.968 16:44:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:11.968 16:44:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:11.968 16:44:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 
-- # local process_type=rebuild 00:27:11.968 16:44:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:11.968 16:44:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:11.968 16:44:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.968 16:44:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.968 16:44:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:11.968 "name": "raid_bdev1", 00:27:11.968 "uuid": "1e0dc720-946d-468a-bc5f-ff094cb950c2", 00:27:11.968 "strip_size_kb": 0, 00:27:11.968 "state": "online", 00:27:11.968 "raid_level": "raid1", 00:27:11.968 "superblock": false, 00:27:11.968 "num_base_bdevs": 2, 00:27:11.968 "num_base_bdevs_discovered": 2, 00:27:11.968 "num_base_bdevs_operational": 2, 00:27:11.968 "process": { 00:27:11.968 "type": "rebuild", 00:27:11.968 "target": "spare", 00:27:11.968 "progress": { 00:27:11.968 "blocks": 24576, 00:27:11.968 "percent": 37 00:27:11.968 } 00:27:11.968 }, 00:27:11.968 "base_bdevs_list": [ 00:27:11.968 { 00:27:11.968 "name": "spare", 00:27:11.968 "uuid": "3e9f776f-c3ce-5d31-adf4-1787923db732", 00:27:11.968 "is_configured": true, 00:27:11.968 "data_offset": 0, 00:27:11.968 "data_size": 65536 00:27:11.968 }, 00:27:11.968 { 00:27:11.968 "name": "BaseBdev2", 00:27:11.968 "uuid": "5d69a631-68c1-585a-b127-f4327808a395", 00:27:11.968 "is_configured": true, 00:27:11.968 "data_offset": 0, 00:27:11.968 "data_size": 65536 00:27:11.968 } 00:27:11.968 ] 00:27:11.968 }' 00:27:11.968 16:44:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:11.968 16:44:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:11.968 16:44:08 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:11.968 16:44:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:11.968 16:44:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:12.534 [2024-07-24 16:44:09.199299] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:12.534 [2024-07-24 16:44:09.300223] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:12.534 [2024-07-24 16:44:09.300287] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:12.534 [2024-07-24 16:44:09.300308] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:12.534 [2024-07-24 16:44:09.300324] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:12.534 16:44:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:12.534 16:44:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:12.534 16:44:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:12.534 16:44:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:12.534 16:44:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:12.534 16:44:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:12.534 16:44:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:12.534 16:44:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:12.534 16:44:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:12.534 
16:44:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:12.534 16:44:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.534 16:44:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.792 16:44:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:12.792 "name": "raid_bdev1", 00:27:12.792 "uuid": "1e0dc720-946d-468a-bc5f-ff094cb950c2", 00:27:12.792 "strip_size_kb": 0, 00:27:12.792 "state": "online", 00:27:12.792 "raid_level": "raid1", 00:27:12.792 "superblock": false, 00:27:12.792 "num_base_bdevs": 2, 00:27:12.792 "num_base_bdevs_discovered": 1, 00:27:12.792 "num_base_bdevs_operational": 1, 00:27:12.792 "base_bdevs_list": [ 00:27:12.792 { 00:27:12.792 "name": null, 00:27:12.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:12.792 "is_configured": false, 00:27:12.792 "data_offset": 0, 00:27:12.792 "data_size": 65536 00:27:12.792 }, 00:27:12.792 { 00:27:12.792 "name": "BaseBdev2", 00:27:12.792 "uuid": "5d69a631-68c1-585a-b127-f4327808a395", 00:27:12.792 "is_configured": true, 00:27:12.792 "data_offset": 0, 00:27:12.792 "data_size": 65536 00:27:12.792 } 00:27:12.792 ] 00:27:12.792 }' 00:27:12.792 16:44:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:12.792 16:44:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.359 16:44:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:13.359 16:44:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:13.359 16:44:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:13.359 16:44:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:13.359 
16:44:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:13.359 16:44:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.359 16:44:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.617 16:44:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:13.617 "name": "raid_bdev1", 00:27:13.617 "uuid": "1e0dc720-946d-468a-bc5f-ff094cb950c2", 00:27:13.617 "strip_size_kb": 0, 00:27:13.617 "state": "online", 00:27:13.617 "raid_level": "raid1", 00:27:13.617 "superblock": false, 00:27:13.617 "num_base_bdevs": 2, 00:27:13.617 "num_base_bdevs_discovered": 1, 00:27:13.617 "num_base_bdevs_operational": 1, 00:27:13.617 "base_bdevs_list": [ 00:27:13.617 { 00:27:13.617 "name": null, 00:27:13.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.617 "is_configured": false, 00:27:13.617 "data_offset": 0, 00:27:13.617 "data_size": 65536 00:27:13.617 }, 00:27:13.617 { 00:27:13.617 "name": "BaseBdev2", 00:27:13.617 "uuid": "5d69a631-68c1-585a-b127-f4327808a395", 00:27:13.617 "is_configured": true, 00:27:13.617 "data_offset": 0, 00:27:13.617 "data_size": 65536 00:27:13.617 } 00:27:13.617 ] 00:27:13.617 }' 00:27:13.617 16:44:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:13.617 16:44:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:13.617 16:44:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:13.617 16:44:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:13.617 16:44:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev 
raid_bdev1 spare 00:27:13.876 [2024-07-24 16:44:10.661878] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:13.876 [2024-07-24 16:44:10.684830] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d144d0 00:27:13.876 [2024-07-24 16:44:10.687129] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:13.876 16:44:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:27:15.250 16:44:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:15.250 16:44:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:15.250 16:44:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:15.250 16:44:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:15.250 16:44:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:15.250 16:44:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.250 16:44:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.250 16:44:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:15.250 "name": "raid_bdev1", 00:27:15.250 "uuid": "1e0dc720-946d-468a-bc5f-ff094cb950c2", 00:27:15.250 "strip_size_kb": 0, 00:27:15.250 "state": "online", 00:27:15.250 "raid_level": "raid1", 00:27:15.250 "superblock": false, 00:27:15.250 "num_base_bdevs": 2, 00:27:15.250 "num_base_bdevs_discovered": 2, 00:27:15.250 "num_base_bdevs_operational": 2, 00:27:15.250 "process": { 00:27:15.250 "type": "rebuild", 00:27:15.250 "target": "spare", 00:27:15.250 "progress": { 00:27:15.250 "blocks": 24576, 00:27:15.250 "percent": 37 00:27:15.250 
} 00:27:15.250 }, 00:27:15.250 "base_bdevs_list": [ 00:27:15.250 { 00:27:15.250 "name": "spare", 00:27:15.250 "uuid": "3e9f776f-c3ce-5d31-adf4-1787923db732", 00:27:15.250 "is_configured": true, 00:27:15.250 "data_offset": 0, 00:27:15.250 "data_size": 65536 00:27:15.250 }, 00:27:15.250 { 00:27:15.250 "name": "BaseBdev2", 00:27:15.250 "uuid": "5d69a631-68c1-585a-b127-f4327808a395", 00:27:15.250 "is_configured": true, 00:27:15.250 "data_offset": 0, 00:27:15.250 "data_size": 65536 00:27:15.250 } 00:27:15.250 ] 00:27:15.250 }' 00:27:15.250 16:44:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:15.250 16:44:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:15.250 16:44:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:15.250 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:15.250 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:27:15.250 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:27:15.250 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:27:15.250 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:27:15.250 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=856 00:27:15.250 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:15.250 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:15.250 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:15.250 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:15.250 16:44:12 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:27:15.251 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:15.251 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.251 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.516 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:15.516 "name": "raid_bdev1", 00:27:15.516 "uuid": "1e0dc720-946d-468a-bc5f-ff094cb950c2", 00:27:15.516 "strip_size_kb": 0, 00:27:15.516 "state": "online", 00:27:15.516 "raid_level": "raid1", 00:27:15.516 "superblock": false, 00:27:15.516 "num_base_bdevs": 2, 00:27:15.516 "num_base_bdevs_discovered": 2, 00:27:15.516 "num_base_bdevs_operational": 2, 00:27:15.516 "process": { 00:27:15.516 "type": "rebuild", 00:27:15.516 "target": "spare", 00:27:15.516 "progress": { 00:27:15.516 "blocks": 30720, 00:27:15.516 "percent": 46 00:27:15.516 } 00:27:15.516 }, 00:27:15.516 "base_bdevs_list": [ 00:27:15.516 { 00:27:15.516 "name": "spare", 00:27:15.516 "uuid": "3e9f776f-c3ce-5d31-adf4-1787923db732", 00:27:15.516 "is_configured": true, 00:27:15.516 "data_offset": 0, 00:27:15.516 "data_size": 65536 00:27:15.516 }, 00:27:15.516 { 00:27:15.516 "name": "BaseBdev2", 00:27:15.516 "uuid": "5d69a631-68c1-585a-b127-f4327808a395", 00:27:15.516 "is_configured": true, 00:27:15.516 "data_offset": 0, 00:27:15.516 "data_size": 65536 00:27:15.516 } 00:27:15.516 ] 00:27:15.516 }' 00:27:15.516 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:15.516 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:15.516 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:15.516 16:44:12 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:15.516 16:44:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:16.486 16:44:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:16.486 16:44:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:16.486 16:44:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:16.486 16:44:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:16.486 16:44:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:16.486 16:44:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:16.744 16:44:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.744 16:44:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.744 16:44:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:16.744 "name": "raid_bdev1", 00:27:16.744 "uuid": "1e0dc720-946d-468a-bc5f-ff094cb950c2", 00:27:16.744 "strip_size_kb": 0, 00:27:16.744 "state": "online", 00:27:16.744 "raid_level": "raid1", 00:27:16.744 "superblock": false, 00:27:16.744 "num_base_bdevs": 2, 00:27:16.744 "num_base_bdevs_discovered": 2, 00:27:16.744 "num_base_bdevs_operational": 2, 00:27:16.744 "process": { 00:27:16.744 "type": "rebuild", 00:27:16.744 "target": "spare", 00:27:16.744 "progress": { 00:27:16.744 "blocks": 57344, 00:27:16.744 "percent": 87 00:27:16.744 } 00:27:16.744 }, 00:27:16.744 "base_bdevs_list": [ 00:27:16.744 { 00:27:16.744 "name": "spare", 00:27:16.744 "uuid": "3e9f776f-c3ce-5d31-adf4-1787923db732", 00:27:16.744 "is_configured": true, 00:27:16.744 
"data_offset": 0, 00:27:16.744 "data_size": 65536 00:27:16.744 }, 00:27:16.744 { 00:27:16.744 "name": "BaseBdev2", 00:27:16.744 "uuid": "5d69a631-68c1-585a-b127-f4327808a395", 00:27:16.744 "is_configured": true, 00:27:16.744 "data_offset": 0, 00:27:16.744 "data_size": 65536 00:27:16.744 } 00:27:16.744 ] 00:27:16.744 }' 00:27:16.744 16:44:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:17.003 16:44:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:17.003 16:44:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:17.003 16:44:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:17.003 16:44:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:17.261 [2024-07-24 16:44:13.912828] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:17.261 [2024-07-24 16:44:13.912906] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:17.261 [2024-07-24 16:44:13.912958] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:17.826 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:17.826 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:17.826 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:17.826 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:17.826 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:17.826 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:17.826 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.826 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.084 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:18.084 "name": "raid_bdev1", 00:27:18.084 "uuid": "1e0dc720-946d-468a-bc5f-ff094cb950c2", 00:27:18.084 "strip_size_kb": 0, 00:27:18.084 "state": "online", 00:27:18.084 "raid_level": "raid1", 00:27:18.084 "superblock": false, 00:27:18.084 "num_base_bdevs": 2, 00:27:18.084 "num_base_bdevs_discovered": 2, 00:27:18.084 "num_base_bdevs_operational": 2, 00:27:18.084 "base_bdevs_list": [ 00:27:18.084 { 00:27:18.084 "name": "spare", 00:27:18.084 "uuid": "3e9f776f-c3ce-5d31-adf4-1787923db732", 00:27:18.084 "is_configured": true, 00:27:18.084 "data_offset": 0, 00:27:18.084 "data_size": 65536 00:27:18.084 }, 00:27:18.084 { 00:27:18.084 "name": "BaseBdev2", 00:27:18.084 "uuid": "5d69a631-68c1-585a-b127-f4327808a395", 00:27:18.084 "is_configured": true, 00:27:18.084 "data_offset": 0, 00:27:18.084 "data_size": 65536 00:27:18.084 } 00:27:18.084 ] 00:27:18.084 }' 00:27:18.084 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:18.084 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:18.084 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:18.342 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:18.342 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:27:18.342 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:18.342 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:18.342 16:44:14 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:18.342 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:18.342 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:18.342 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.342 16:44:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.342 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:18.342 "name": "raid_bdev1", 00:27:18.342 "uuid": "1e0dc720-946d-468a-bc5f-ff094cb950c2", 00:27:18.342 "strip_size_kb": 0, 00:27:18.342 "state": "online", 00:27:18.342 "raid_level": "raid1", 00:27:18.342 "superblock": false, 00:27:18.342 "num_base_bdevs": 2, 00:27:18.342 "num_base_bdevs_discovered": 2, 00:27:18.342 "num_base_bdevs_operational": 2, 00:27:18.342 "base_bdevs_list": [ 00:27:18.342 { 00:27:18.342 "name": "spare", 00:27:18.342 "uuid": "3e9f776f-c3ce-5d31-adf4-1787923db732", 00:27:18.342 "is_configured": true, 00:27:18.342 "data_offset": 0, 00:27:18.342 "data_size": 65536 00:27:18.342 }, 00:27:18.342 { 00:27:18.342 "name": "BaseBdev2", 00:27:18.342 "uuid": "5d69a631-68c1-585a-b127-f4327808a395", 00:27:18.342 "is_configured": true, 00:27:18.342 "data_offset": 0, 00:27:18.342 "data_size": 65536 00:27:18.342 } 00:27:18.342 ] 00:27:18.342 }' 00:27:18.342 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 
00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.600 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.858 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:18.858 "name": "raid_bdev1", 00:27:18.858 "uuid": "1e0dc720-946d-468a-bc5f-ff094cb950c2", 00:27:18.858 "strip_size_kb": 0, 00:27:18.858 "state": "online", 00:27:18.858 "raid_level": "raid1", 00:27:18.858 "superblock": false, 00:27:18.858 "num_base_bdevs": 2, 00:27:18.858 "num_base_bdevs_discovered": 2, 00:27:18.858 "num_base_bdevs_operational": 2, 00:27:18.858 "base_bdevs_list": [ 00:27:18.858 { 00:27:18.858 "name": "spare", 00:27:18.858 "uuid": "3e9f776f-c3ce-5d31-adf4-1787923db732", 00:27:18.858 
"is_configured": true, 00:27:18.858 "data_offset": 0, 00:27:18.858 "data_size": 65536 00:27:18.858 }, 00:27:18.858 { 00:27:18.858 "name": "BaseBdev2", 00:27:18.858 "uuid": "5d69a631-68c1-585a-b127-f4327808a395", 00:27:18.858 "is_configured": true, 00:27:18.858 "data_offset": 0, 00:27:18.858 "data_size": 65536 00:27:18.858 } 00:27:18.858 ] 00:27:18.858 }' 00:27:18.858 16:44:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:18.858 16:44:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.424 16:44:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:19.424 [2024-07-24 16:44:16.272271] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:19.424 [2024-07-24 16:44:16.272307] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:19.424 [2024-07-24 16:44:16.272389] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:19.424 [2024-07-24 16:44:16.272472] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:19.424 [2024-07-24 16:44:16.272488] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:27:19.682 16:44:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.682 16:44:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:27:19.682 16:44:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:27:19.682 16:44:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:27:19.682 16:44:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 
00:27:19.682 16:44:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:19.683 16:44:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:19.683 16:44:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:19.683 16:44:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:19.683 16:44:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:19.683 16:44:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:19.683 16:44:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:19.683 16:44:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:19.683 16:44:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:19.683 16:44:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:19.941 /dev/nbd0 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@873 -- # break 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:19.941 1+0 records in 00:27:19.941 1+0 records out 00:27:19.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248212 s, 16.5 MB/s 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:19.941 16:44:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:20.199 /dev/nbd1 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- 
# local i 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:20.199 1+0 records in 00:27:20.199 1+0 records out 00:27:20.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274492 s, 14.9 MB/s 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:20.199 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:20.457 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:27:20.457 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:20.457 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:20.457 16:44:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:20.458 16:44:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks 
/var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:20.458 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:20.458 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:20.458 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:20.458 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:20.458 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:20.458 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:20.715 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:20.716 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:20.716 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:20.716 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:20.716 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:20.716 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:20.716 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:20.716 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:20.716 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:20.716 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 1739665 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 1739665 ']' 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 1739665 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1739665 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1739665' 00:27:20.974 killing process with pid 1739665 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 1739665 00:27:20.974 Received shutdown signal, test time was about 60.000000 seconds 00:27:20.974 00:27:20.974 Latency(us) 00:27:20.974 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.974 =================================================================================================================== 00:27:20.974 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:20.974 [2024-07-24 16:44:17.835017] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:20.974 16:44:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 1739665 00:27:21.541 [2024-07-24 16:44:18.175602] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:27:23.442 00:27:23.442 real 0m24.810s 00:27:23.442 user 0m33.940s 00:27:23.442 sys 0m4.650s 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:23.442 ************************************ 00:27:23.442 END TEST raid_rebuild_test 00:27:23.442 ************************************ 00:27:23.442 16:44:19 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:27:23.442 16:44:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:23.442 16:44:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:23.442 16:44:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:23.442 ************************************ 00:27:23.442 START TEST raid_rebuild_test_sb 00:27:23.442 ************************************ 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:27:23.442 16:44:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@604 -- # strip_size=0 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=1744111 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 1744111 /var/tmp/spdk-raid.sock 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 1744111 ']' 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:23.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:23.442 16:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:23.442 [2024-07-24 16:44:20.052685] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:27:23.442 [2024-07-24 16:44:20.052809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744111 ] 00:27:23.442 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:23.442 Zero copy mechanism will not be used. 00:27:23.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:23.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:23.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:23.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:23.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:23.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:23.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:23.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:23.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:23.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:23.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:23.442 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:23.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:23.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:23.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.442 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:23.443 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:23.443 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:23.443 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:23.443 [2024-07-24 16:44:20.277619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.701 [2024-07-24 16:44:20.537157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.267 [2024-07-24 16:44:20.870958] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:24.267 [2024-07-24 16:44:20.870993] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:24.268 16:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:24.268 16:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:27:24.268 16:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:24.268 16:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:24.526 BaseBdev1_malloc 00:27:24.526 
16:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:24.784 [2024-07-24 16:44:21.547995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:24.784 [2024-07-24 16:44:21.548056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:24.784 [2024-07-24 16:44:21.548085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:27:24.784 [2024-07-24 16:44:21.548104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:24.784 [2024-07-24 16:44:21.550801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:24.784 [2024-07-24 16:44:21.550838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:24.784 BaseBdev1 00:27:24.784 16:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:24.784 16:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:25.041 BaseBdev2_malloc 00:27:25.041 16:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:25.298 [2024-07-24 16:44:22.077726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:25.298 [2024-07-24 16:44:22.077786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:25.298 [2024-07-24 16:44:22.077811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:27:25.298 [2024-07-24 16:44:22.077832] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:25.298 [2024-07-24 16:44:22.080582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:25.298 [2024-07-24 16:44:22.080618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:25.298 BaseBdev2 00:27:25.298 16:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:25.556 spare_malloc 00:27:25.556 16:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:25.815 spare_delay 00:27:25.815 16:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:26.073 [2024-07-24 16:44:22.755630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:26.073 [2024-07-24 16:44:22.755695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:26.073 [2024-07-24 16:44:22.755725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:27:26.073 [2024-07-24 16:44:22.755742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:26.073 [2024-07-24 16:44:22.758535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:26.073 [2024-07-24 16:44:22.758573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:26.073 spare 00:27:26.073 16:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:26.331 [2024-07-24 16:44:22.984483] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:26.331 [2024-07-24 16:44:22.986798] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:26.331 [2024-07-24 16:44:22.987021] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:27:26.331 [2024-07-24 16:44:22.987046] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:26.331 [2024-07-24 16:44:22.987423] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:27:26.331 [2024-07-24 16:44:22.987672] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:27:26.331 [2024-07-24 16:44:22.987687] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:27:26.331 [2024-07-24 16:44:22.987895] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.331 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:26.331 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:26.331 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:26.331 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:26.331 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:26.331 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:26.331 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:26.331 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:27:26.331 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:26.331 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:26.331 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.331 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.589 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:26.589 "name": "raid_bdev1", 00:27:26.589 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:26.589 "strip_size_kb": 0, 00:27:26.589 "state": "online", 00:27:26.589 "raid_level": "raid1", 00:27:26.589 "superblock": true, 00:27:26.589 "num_base_bdevs": 2, 00:27:26.589 "num_base_bdevs_discovered": 2, 00:27:26.589 "num_base_bdevs_operational": 2, 00:27:26.589 "base_bdevs_list": [ 00:27:26.589 { 00:27:26.589 "name": "BaseBdev1", 00:27:26.589 "uuid": "e0bea36e-506f-57f7-92d4-4985fc0dc3fb", 00:27:26.589 "is_configured": true, 00:27:26.589 "data_offset": 2048, 00:27:26.589 "data_size": 63488 00:27:26.589 }, 00:27:26.589 { 00:27:26.589 "name": "BaseBdev2", 00:27:26.589 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:26.589 "is_configured": true, 00:27:26.589 "data_offset": 2048, 00:27:26.589 "data_size": 63488 00:27:26.589 } 00:27:26.589 ] 00:27:26.589 }' 00:27:26.589 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:26.589 16:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:27.154 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:27.154 16:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r 
'.[].num_blocks' 00:27:27.155 [2024-07-24 16:44:24.003321] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:27.412 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:27:27.412 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.412 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:27.412 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:27:27.412 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:27:27.412 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:27:27.412 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:27:27.412 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:27.412 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:27.412 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:27.413 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:27.413 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:27.413 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:27.413 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:27.413 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:27.413 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:27.413 16:44:24 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:27.671 [2024-07-24 16:44:24.460269] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:27:27.671 /dev/nbd0 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:27.671 1+0 records in 00:27:27.671 1+0 records out 00:27:27.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026589 s, 15.4 MB/s 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:27:27.671 16:44:24 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:27:27.671 16:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:27:34.298 63488+0 records in 00:27:34.298 63488+0 records out 00:27:34.298 32505856 bytes (33 MB, 31 MiB) copied, 5.7588 s, 5.6 MB/s 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:34.298 [2024-07-24 16:44:30.529480] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:34.298 [2024-07-24 16:44:30.750190] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:34.298 "name": "raid_bdev1", 00:27:34.298 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:34.298 "strip_size_kb": 0, 00:27:34.298 "state": "online", 00:27:34.298 "raid_level": "raid1", 00:27:34.298 "superblock": true, 00:27:34.298 "num_base_bdevs": 2, 00:27:34.298 "num_base_bdevs_discovered": 1, 00:27:34.298 "num_base_bdevs_operational": 1, 00:27:34.298 "base_bdevs_list": [ 00:27:34.298 { 00:27:34.298 "name": null, 00:27:34.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:34.298 "is_configured": false, 00:27:34.298 "data_offset": 2048, 00:27:34.298 "data_size": 63488 00:27:34.298 }, 00:27:34.298 { 00:27:34.298 "name": "BaseBdev2", 00:27:34.298 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:34.298 "is_configured": true, 00:27:34.298 "data_offset": 2048, 00:27:34.298 "data_size": 63488 00:27:34.298 } 00:27:34.298 ] 00:27:34.298 }' 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:34.298 16:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:34.866 16:44:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:35.125 [2024-07-24 16:44:31.784998] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:35.125 [2024-07-24 16:44:31.812194] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caaba0 00:27:35.125 [2024-07-24 16:44:31.814511] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:35.125 16:44:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:36.062 16:44:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:36.062 16:44:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:36.062 16:44:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:36.062 16:44:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:36.062 16:44:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:36.062 16:44:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.062 16:44:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.320 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:36.320 "name": "raid_bdev1", 00:27:36.321 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:36.321 "strip_size_kb": 0, 00:27:36.321 "state": "online", 00:27:36.321 "raid_level": "raid1", 00:27:36.321 "superblock": true, 00:27:36.321 "num_base_bdevs": 2, 00:27:36.321 "num_base_bdevs_discovered": 2, 00:27:36.321 "num_base_bdevs_operational": 2, 00:27:36.321 "process": { 00:27:36.321 "type": "rebuild", 00:27:36.321 "target": "spare", 00:27:36.321 "progress": { 00:27:36.321 "blocks": 24576, 00:27:36.321 "percent": 38 00:27:36.321 } 00:27:36.321 }, 00:27:36.321 
"base_bdevs_list": [ 00:27:36.321 { 00:27:36.321 "name": "spare", 00:27:36.321 "uuid": "ecd92967-3295-590a-ac1b-546d5915d0d1", 00:27:36.321 "is_configured": true, 00:27:36.321 "data_offset": 2048, 00:27:36.321 "data_size": 63488 00:27:36.321 }, 00:27:36.321 { 00:27:36.321 "name": "BaseBdev2", 00:27:36.321 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:36.321 "is_configured": true, 00:27:36.321 "data_offset": 2048, 00:27:36.321 "data_size": 63488 00:27:36.321 } 00:27:36.321 ] 00:27:36.321 }' 00:27:36.321 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:36.321 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:36.321 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:36.321 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:36.321 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:36.580 [2024-07-24 16:44:33.363889] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:36.580 [2024-07-24 16:44:33.427427] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:36.580 [2024-07-24 16:44:33.427489] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:36.580 [2024-07-24 16:44:33.427510] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:36.580 [2024-07-24 16:44:33.427533] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:36.839 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:36.839 16:44:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:36.839 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:36.839 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:36.839 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:36.839 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:36.839 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:36.839 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:36.839 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:36.839 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:36.839 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.839 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.098 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:37.098 "name": "raid_bdev1", 00:27:37.098 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:37.098 "strip_size_kb": 0, 00:27:37.098 "state": "online", 00:27:37.098 "raid_level": "raid1", 00:27:37.098 "superblock": true, 00:27:37.098 "num_base_bdevs": 2, 00:27:37.098 "num_base_bdevs_discovered": 1, 00:27:37.098 "num_base_bdevs_operational": 1, 00:27:37.098 "base_bdevs_list": [ 00:27:37.098 { 00:27:37.098 "name": null, 00:27:37.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.098 "is_configured": false, 00:27:37.098 "data_offset": 2048, 00:27:37.098 "data_size": 63488 00:27:37.098 }, 00:27:37.098 { 00:27:37.098 "name": "BaseBdev2", 
00:27:37.098 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:37.098 "is_configured": true, 00:27:37.098 "data_offset": 2048, 00:27:37.098 "data_size": 63488 00:27:37.098 } 00:27:37.098 ] 00:27:37.098 }' 00:27:37.098 16:44:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:37.098 16:44:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.666 16:44:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:37.666 16:44:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:37.666 16:44:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:37.666 16:44:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:37.666 16:44:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:37.666 16:44:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.666 16:44:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.925 16:44:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:37.925 "name": "raid_bdev1", 00:27:37.925 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:37.925 "strip_size_kb": 0, 00:27:37.925 "state": "online", 00:27:37.925 "raid_level": "raid1", 00:27:37.925 "superblock": true, 00:27:37.925 "num_base_bdevs": 2, 00:27:37.925 "num_base_bdevs_discovered": 1, 00:27:37.925 "num_base_bdevs_operational": 1, 00:27:37.925 "base_bdevs_list": [ 00:27:37.925 { 00:27:37.925 "name": null, 00:27:37.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.925 "is_configured": false, 00:27:37.925 "data_offset": 2048, 00:27:37.925 "data_size": 63488 00:27:37.925 }, 
00:27:37.925 { 00:27:37.925 "name": "BaseBdev2", 00:27:37.925 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:37.925 "is_configured": true, 00:27:37.925 "data_offset": 2048, 00:27:37.925 "data_size": 63488 00:27:37.925 } 00:27:37.925 ] 00:27:37.925 }' 00:27:37.925 16:44:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:37.925 16:44:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:37.925 16:44:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:37.925 16:44:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:37.925 16:44:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:38.184 [2024-07-24 16:44:34.816757] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:38.184 [2024-07-24 16:44:34.841430] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caac70 00:27:38.184 [2024-07-24 16:44:34.843750] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:38.184 16:44:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:27:39.120 16:44:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:39.120 16:44:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:39.120 16:44:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:39.120 16:44:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:39.120 16:44:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:39.120 16:44:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.120 16:44:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.379 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:39.379 "name": "raid_bdev1", 00:27:39.379 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:39.379 "strip_size_kb": 0, 00:27:39.379 "state": "online", 00:27:39.379 "raid_level": "raid1", 00:27:39.379 "superblock": true, 00:27:39.379 "num_base_bdevs": 2, 00:27:39.379 "num_base_bdevs_discovered": 2, 00:27:39.379 "num_base_bdevs_operational": 2, 00:27:39.379 "process": { 00:27:39.379 "type": "rebuild", 00:27:39.379 "target": "spare", 00:27:39.379 "progress": { 00:27:39.379 "blocks": 24576, 00:27:39.379 "percent": 38 00:27:39.379 } 00:27:39.379 }, 00:27:39.379 "base_bdevs_list": [ 00:27:39.379 { 00:27:39.379 "name": "spare", 00:27:39.379 "uuid": "ecd92967-3295-590a-ac1b-546d5915d0d1", 00:27:39.379 "is_configured": true, 00:27:39.379 "data_offset": 2048, 00:27:39.379 "data_size": 63488 00:27:39.379 }, 00:27:39.379 { 00:27:39.379 "name": "BaseBdev2", 00:27:39.379 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:39.379 "is_configured": true, 00:27:39.379 "data_offset": 2048, 00:27:39.379 "data_size": 63488 00:27:39.379 } 00:27:39.379 ] 00:27:39.379 }' 00:27:39.379 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:39.379 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:39.379 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:39.379 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:39.379 16:44:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:27:39.379 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:27:39.379 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:27:39.379 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:27:39.379 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:27:39.379 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:27:39.379 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=880 00:27:39.380 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:39.380 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:39.380 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:39.380 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:39.380 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:39.380 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:39.380 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.380 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.639 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:39.639 "name": "raid_bdev1", 00:27:39.639 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:39.639 "strip_size_kb": 0, 00:27:39.639 "state": "online", 00:27:39.639 "raid_level": "raid1", 00:27:39.639 
"superblock": true, 00:27:39.639 "num_base_bdevs": 2, 00:27:39.639 "num_base_bdevs_discovered": 2, 00:27:39.639 "num_base_bdevs_operational": 2, 00:27:39.639 "process": { 00:27:39.639 "type": "rebuild", 00:27:39.639 "target": "spare", 00:27:39.639 "progress": { 00:27:39.639 "blocks": 30720, 00:27:39.639 "percent": 48 00:27:39.639 } 00:27:39.639 }, 00:27:39.639 "base_bdevs_list": [ 00:27:39.639 { 00:27:39.639 "name": "spare", 00:27:39.639 "uuid": "ecd92967-3295-590a-ac1b-546d5915d0d1", 00:27:39.639 "is_configured": true, 00:27:39.639 "data_offset": 2048, 00:27:39.639 "data_size": 63488 00:27:39.639 }, 00:27:39.639 { 00:27:39.639 "name": "BaseBdev2", 00:27:39.639 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:39.639 "is_configured": true, 00:27:39.639 "data_offset": 2048, 00:27:39.639 "data_size": 63488 00:27:39.639 } 00:27:39.639 ] 00:27:39.639 }' 00:27:39.639 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:39.639 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:39.639 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:39.897 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:39.897 16:44:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:40.833 16:44:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:40.833 16:44:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:40.833 16:44:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:40.833 16:44:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:40.833 16:44:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:40.833 
16:44:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:40.834 16:44:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.834 16:44:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.092 16:44:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:41.092 "name": "raid_bdev1", 00:27:41.092 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:41.092 "strip_size_kb": 0, 00:27:41.092 "state": "online", 00:27:41.092 "raid_level": "raid1", 00:27:41.092 "superblock": true, 00:27:41.092 "num_base_bdevs": 2, 00:27:41.092 "num_base_bdevs_discovered": 2, 00:27:41.092 "num_base_bdevs_operational": 2, 00:27:41.092 "process": { 00:27:41.092 "type": "rebuild", 00:27:41.092 "target": "spare", 00:27:41.092 "progress": { 00:27:41.092 "blocks": 57344, 00:27:41.092 "percent": 90 00:27:41.092 } 00:27:41.092 }, 00:27:41.092 "base_bdevs_list": [ 00:27:41.092 { 00:27:41.092 "name": "spare", 00:27:41.092 "uuid": "ecd92967-3295-590a-ac1b-546d5915d0d1", 00:27:41.092 "is_configured": true, 00:27:41.092 "data_offset": 2048, 00:27:41.092 "data_size": 63488 00:27:41.092 }, 00:27:41.092 { 00:27:41.092 "name": "BaseBdev2", 00:27:41.092 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:41.092 "is_configured": true, 00:27:41.092 "data_offset": 2048, 00:27:41.092 "data_size": 63488 00:27:41.092 } 00:27:41.092 ] 00:27:41.092 }' 00:27:41.092 16:44:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:41.092 16:44:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:41.092 16:44:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:41.092 16:44:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:41.092 16:44:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:41.351 [2024-07-24 16:44:37.968598] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:41.351 [2024-07-24 16:44:37.968673] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:41.351 [2024-07-24 16:44:37.968776] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:42.287 16:44:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:42.287 16:44:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:42.287 16:44:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:42.287 16:44:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:42.287 16:44:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:42.287 16:44:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:42.287 16:44:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.287 16:44:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.287 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:42.287 "name": "raid_bdev1", 00:27:42.287 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:42.287 "strip_size_kb": 0, 00:27:42.287 "state": "online", 00:27:42.287 "raid_level": "raid1", 00:27:42.287 "superblock": true, 00:27:42.287 "num_base_bdevs": 2, 00:27:42.287 "num_base_bdevs_discovered": 2, 00:27:42.287 "num_base_bdevs_operational": 2, 00:27:42.287 
"base_bdevs_list": [ 00:27:42.287 { 00:27:42.287 "name": "spare", 00:27:42.287 "uuid": "ecd92967-3295-590a-ac1b-546d5915d0d1", 00:27:42.287 "is_configured": true, 00:27:42.287 "data_offset": 2048, 00:27:42.287 "data_size": 63488 00:27:42.287 }, 00:27:42.287 { 00:27:42.287 "name": "BaseBdev2", 00:27:42.287 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:42.287 "is_configured": true, 00:27:42.287 "data_offset": 2048, 00:27:42.287 "data_size": 63488 00:27:42.287 } 00:27:42.287 ] 00:27:42.287 }' 00:27:42.287 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:42.287 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:42.287 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:42.546 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:42.546 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:27:42.546 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:42.546 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:42.546 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:42.546 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:42.546 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:42.546 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.546 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.546 16:44:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:42.546 "name": "raid_bdev1", 00:27:42.546 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:42.546 "strip_size_kb": 0, 00:27:42.546 "state": "online", 00:27:42.546 "raid_level": "raid1", 00:27:42.546 "superblock": true, 00:27:42.546 "num_base_bdevs": 2, 00:27:42.546 "num_base_bdevs_discovered": 2, 00:27:42.546 "num_base_bdevs_operational": 2, 00:27:42.546 "base_bdevs_list": [ 00:27:42.546 { 00:27:42.546 "name": "spare", 00:27:42.546 "uuid": "ecd92967-3295-590a-ac1b-546d5915d0d1", 00:27:42.546 "is_configured": true, 00:27:42.546 "data_offset": 2048, 00:27:42.546 "data_size": 63488 00:27:42.546 }, 00:27:42.546 { 00:27:42.546 "name": "BaseBdev2", 00:27:42.546 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:42.546 "is_configured": true, 00:27:42.546 "data_offset": 2048, 00:27:42.546 "data_size": 63488 00:27:42.546 } 00:27:42.546 ] 00:27:42.546 }' 00:27:42.546 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.805 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.064 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:43.064 "name": "raid_bdev1", 00:27:43.064 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:43.064 "strip_size_kb": 0, 00:27:43.064 "state": "online", 00:27:43.064 "raid_level": "raid1", 00:27:43.064 "superblock": true, 00:27:43.064 "num_base_bdevs": 2, 00:27:43.064 "num_base_bdevs_discovered": 2, 00:27:43.064 "num_base_bdevs_operational": 2, 00:27:43.064 "base_bdevs_list": [ 00:27:43.064 { 00:27:43.064 "name": "spare", 00:27:43.064 "uuid": "ecd92967-3295-590a-ac1b-546d5915d0d1", 00:27:43.064 "is_configured": true, 00:27:43.064 "data_offset": 2048, 00:27:43.064 "data_size": 63488 00:27:43.064 }, 00:27:43.064 { 00:27:43.064 "name": "BaseBdev2", 00:27:43.064 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:43.064 "is_configured": true, 00:27:43.064 "data_offset": 2048, 00:27:43.064 "data_size": 63488 00:27:43.064 } 00:27:43.064 ] 00:27:43.064 }' 00:27:43.064 16:44:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:43.064 16:44:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.631 16:44:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:43.889 [2024-07-24 16:44:40.508369] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:43.889 [2024-07-24 16:44:40.508404] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:43.889 [2024-07-24 16:44:40.508493] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:43.889 [2024-07-24 16:44:40.508577] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:43.889 [2024-07-24 16:44:40.508599] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:27:43.889 16:44:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.889 16:44:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:27:44.148 16:44:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:27:44.148 16:44:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:27:44.148 16:44:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:27:44.148 16:44:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:44.148 16:44:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:44.148 16:44:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:44.148 16:44:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:44.148 16:44:40 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:44.148 16:44:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:44.148 16:44:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:44.148 16:44:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:44.148 16:44:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:44.148 16:44:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:44.148 /dev/nbd0 00:27:44.148 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:44.406 1+0 records in 00:27:44.406 1+0 records out 00:27:44.406 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249547 s, 16.4 MB/s 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:44.406 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:44.406 /dev/nbd1 00:27:44.663 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:44.663 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:44.663 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:44.663 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:27:44.663 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:44.663 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:44.663 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:44.663 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:27:44.663 16:44:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:44.664 1+0 records in 00:27:44.664 1+0 records out 00:27:44.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327096 s, 12.5 MB/s 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:44.664 16:44:41 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:44.664 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:44.921 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:44.921 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:44.921 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:44.921 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:44.921 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:44.921 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:44.921 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:44.921 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:44.921 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:44.921 16:44:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:45.179 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:45.179 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:45.179 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:45.179 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:45.179 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:45.179 
16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:45.179 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:45.179 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:45.179 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:27:45.179 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:45.437 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:45.694 [2024-07-24 16:44:42.462063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:45.694 [2024-07-24 16:44:42.462123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:45.694 [2024-07-24 16:44:42.462158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:27:45.694 [2024-07-24 16:44:42.462174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:45.694 [2024-07-24 16:44:42.465005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:45.694 [2024-07-24 16:44:42.465041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:45.694 [2024-07-24 16:44:42.465156] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:45.694 [2024-07-24 16:44:42.465216] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:45.694 [2024-07-24 16:44:42.465425] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:45.694 spare 00:27:45.694 16:44:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:45.694 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:45.694 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:45.694 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:45.694 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:45.694 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:45.694 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:45.694 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:45.694 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:45.694 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:45.694 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.694 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.952 [2024-07-24 16:44:42.565765] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043880 00:27:45.952 [2024-07-24 16:44:42.565796] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:45.952 [2024-07-24 16:44:42.566128] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc9320 00:27:45.952 [2024-07-24 16:44:42.566403] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043880 00:27:45.952 [2024-07-24 16:44:42.566419] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x616000043880 00:27:45.952 [2024-07-24 16:44:42.566613] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:45.952 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:45.952 "name": "raid_bdev1", 00:27:45.952 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:45.952 "strip_size_kb": 0, 00:27:45.952 "state": "online", 00:27:45.952 "raid_level": "raid1", 00:27:45.952 "superblock": true, 00:27:45.952 "num_base_bdevs": 2, 00:27:45.952 "num_base_bdevs_discovered": 2, 00:27:45.952 "num_base_bdevs_operational": 2, 00:27:45.952 "base_bdevs_list": [ 00:27:45.952 { 00:27:45.952 "name": "spare", 00:27:45.952 "uuid": "ecd92967-3295-590a-ac1b-546d5915d0d1", 00:27:45.952 "is_configured": true, 00:27:45.952 "data_offset": 2048, 00:27:45.952 "data_size": 63488 00:27:45.952 }, 00:27:45.952 { 00:27:45.952 "name": "BaseBdev2", 00:27:45.952 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:45.952 "is_configured": true, 00:27:45.952 "data_offset": 2048, 00:27:45.952 "data_size": 63488 00:27:45.952 } 00:27:45.952 ] 00:27:45.952 }' 00:27:45.952 16:44:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:45.952 16:44:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:46.515 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:46.515 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:46.515 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:46.515 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:46.515 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:46.515 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.515 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:46.800 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:46.800 "name": "raid_bdev1", 00:27:46.800 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:46.800 "strip_size_kb": 0, 00:27:46.800 "state": "online", 00:27:46.800 "raid_level": "raid1", 00:27:46.800 "superblock": true, 00:27:46.800 "num_base_bdevs": 2, 00:27:46.800 "num_base_bdevs_discovered": 2, 00:27:46.800 "num_base_bdevs_operational": 2, 00:27:46.800 "base_bdevs_list": [ 00:27:46.800 { 00:27:46.800 "name": "spare", 00:27:46.800 "uuid": "ecd92967-3295-590a-ac1b-546d5915d0d1", 00:27:46.800 "is_configured": true, 00:27:46.800 "data_offset": 2048, 00:27:46.800 "data_size": 63488 00:27:46.800 }, 00:27:46.800 { 00:27:46.800 "name": "BaseBdev2", 00:27:46.800 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:46.800 "is_configured": true, 00:27:46.800 "data_offset": 2048, 00:27:46.800 "data_size": 63488 00:27:46.800 } 00:27:46.800 ] 00:27:46.800 }' 00:27:46.800 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:46.800 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:46.800 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:46.800 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:46.800 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.800 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:47.058 16:44:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:27:47.058 16:44:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:47.317 [2024-07-24 16:44:44.022598] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:47.317 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:47.317 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:47.317 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:47.317 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:47.317 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:47.317 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:47.317 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:47.317 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:47.317 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:47.317 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:47.317 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.317 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.575 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:47.575 "name": "raid_bdev1", 00:27:47.575 "uuid": 
"ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:47.575 "strip_size_kb": 0, 00:27:47.575 "state": "online", 00:27:47.575 "raid_level": "raid1", 00:27:47.575 "superblock": true, 00:27:47.575 "num_base_bdevs": 2, 00:27:47.575 "num_base_bdevs_discovered": 1, 00:27:47.575 "num_base_bdevs_operational": 1, 00:27:47.575 "base_bdevs_list": [ 00:27:47.575 { 00:27:47.575 "name": null, 00:27:47.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.575 "is_configured": false, 00:27:47.575 "data_offset": 2048, 00:27:47.575 "data_size": 63488 00:27:47.575 }, 00:27:47.575 { 00:27:47.575 "name": "BaseBdev2", 00:27:47.575 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:47.575 "is_configured": true, 00:27:47.575 "data_offset": 2048, 00:27:47.575 "data_size": 63488 00:27:47.575 } 00:27:47.575 ] 00:27:47.575 }' 00:27:47.575 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:47.575 16:44:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:48.140 16:44:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:48.398 [2024-07-24 16:44:45.045373] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:48.398 [2024-07-24 16:44:45.045578] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:48.398 [2024-07-24 16:44:45.045604] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:48.398 [2024-07-24 16:44:45.045644] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:48.398 [2024-07-24 16:44:45.070980] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc93f0 00:27:48.398 [2024-07-24 16:44:45.073353] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:48.398 16:44:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:27:49.326 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:49.326 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:49.326 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:49.326 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:49.326 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:49.326 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.326 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.584 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:49.584 "name": "raid_bdev1", 00:27:49.584 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:49.584 "strip_size_kb": 0, 00:27:49.584 "state": "online", 00:27:49.584 "raid_level": "raid1", 00:27:49.584 "superblock": true, 00:27:49.584 "num_base_bdevs": 2, 00:27:49.584 "num_base_bdevs_discovered": 2, 00:27:49.584 "num_base_bdevs_operational": 2, 00:27:49.584 "process": { 00:27:49.584 "type": "rebuild", 00:27:49.584 "target": "spare", 00:27:49.584 "progress": { 00:27:49.584 "blocks": 24576, 00:27:49.584 "percent": 38 
00:27:49.584 } 00:27:49.584 }, 00:27:49.584 "base_bdevs_list": [ 00:27:49.584 { 00:27:49.584 "name": "spare", 00:27:49.584 "uuid": "ecd92967-3295-590a-ac1b-546d5915d0d1", 00:27:49.584 "is_configured": true, 00:27:49.584 "data_offset": 2048, 00:27:49.584 "data_size": 63488 00:27:49.584 }, 00:27:49.584 { 00:27:49.584 "name": "BaseBdev2", 00:27:49.584 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:49.584 "is_configured": true, 00:27:49.584 "data_offset": 2048, 00:27:49.584 "data_size": 63488 00:27:49.584 } 00:27:49.584 ] 00:27:49.584 }' 00:27:49.584 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:49.584 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:49.584 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:49.584 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:49.584 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:49.842 [2024-07-24 16:44:46.626756] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:49.842 [2024-07-24 16:44:46.686217] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:49.842 [2024-07-24 16:44:46.686284] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:49.842 [2024-07-24 16:44:46.686305] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:49.842 [2024-07-24 16:44:46.686319] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:50.100 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:50.100 16:44:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:50.100 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:50.100 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:50.100 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:50.100 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:50.100 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:50.100 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:50.100 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:50.100 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:50.100 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.100 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.357 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:50.357 "name": "raid_bdev1", 00:27:50.357 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:50.357 "strip_size_kb": 0, 00:27:50.357 "state": "online", 00:27:50.357 "raid_level": "raid1", 00:27:50.357 "superblock": true, 00:27:50.357 "num_base_bdevs": 2, 00:27:50.357 "num_base_bdevs_discovered": 1, 00:27:50.357 "num_base_bdevs_operational": 1, 00:27:50.357 "base_bdevs_list": [ 00:27:50.357 { 00:27:50.357 "name": null, 00:27:50.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.357 "is_configured": false, 00:27:50.357 "data_offset": 2048, 00:27:50.357 "data_size": 63488 00:27:50.357 }, 00:27:50.357 { 
00:27:50.357 "name": "BaseBdev2", 00:27:50.357 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:50.357 "is_configured": true, 00:27:50.357 "data_offset": 2048, 00:27:50.357 "data_size": 63488 00:27:50.357 } 00:27:50.357 ] 00:27:50.357 }' 00:27:50.357 16:44:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:50.357 16:44:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:50.922 16:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:50.922 [2024-07-24 16:44:47.752194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:50.922 [2024-07-24 16:44:47.752261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:50.922 [2024-07-24 16:44:47.752287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:27:50.922 [2024-07-24 16:44:47.752304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:50.922 [2024-07-24 16:44:47.752913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:50.922 [2024-07-24 16:44:47.752943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:50.922 [2024-07-24 16:44:47.753056] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:50.922 [2024-07-24 16:44:47.753076] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:50.922 [2024-07-24 16:44:47.753091] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:27:50.922 [2024-07-24 16:44:47.753129] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:50.922 [2024-07-24 16:44:47.778871] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc94c0 00:27:50.922 spare 00:27:50.922 [2024-07-24 16:44:47.781201] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:51.179 16:44:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:27:52.112 16:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:52.112 16:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:52.112 16:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:52.112 16:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:52.112 16:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:52.112 16:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.112 16:44:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.371 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:52.371 "name": "raid_bdev1", 00:27:52.371 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:52.371 "strip_size_kb": 0, 00:27:52.371 "state": "online", 00:27:52.371 "raid_level": "raid1", 00:27:52.371 "superblock": true, 00:27:52.371 "num_base_bdevs": 2, 00:27:52.371 "num_base_bdevs_discovered": 2, 00:27:52.371 "num_base_bdevs_operational": 2, 00:27:52.371 "process": { 00:27:52.371 "type": "rebuild", 00:27:52.371 "target": "spare", 00:27:52.371 "progress": { 00:27:52.371 "blocks": 24576, 00:27:52.371 
"percent": 38 00:27:52.371 } 00:27:52.371 }, 00:27:52.371 "base_bdevs_list": [ 00:27:52.371 { 00:27:52.371 "name": "spare", 00:27:52.371 "uuid": "ecd92967-3295-590a-ac1b-546d5915d0d1", 00:27:52.371 "is_configured": true, 00:27:52.371 "data_offset": 2048, 00:27:52.371 "data_size": 63488 00:27:52.371 }, 00:27:52.371 { 00:27:52.371 "name": "BaseBdev2", 00:27:52.371 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:52.371 "is_configured": true, 00:27:52.371 "data_offset": 2048, 00:27:52.371 "data_size": 63488 00:27:52.371 } 00:27:52.371 ] 00:27:52.371 }' 00:27:52.371 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:52.371 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:52.371 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:52.371 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:52.371 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:52.628 [2024-07-24 16:44:49.322621] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:52.628 [2024-07-24 16:44:49.394133] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:52.629 [2024-07-24 16:44:49.394199] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:52.629 [2024-07-24 16:44:49.394223] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:52.629 [2024-07-24 16:44:49.394235] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:52.629 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:52.629 
16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:52.629 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:52.629 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:52.629 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:52.629 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:52.629 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:52.629 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:52.629 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:52.629 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:52.629 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.629 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.886 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:52.886 "name": "raid_bdev1", 00:27:52.886 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:52.886 "strip_size_kb": 0, 00:27:52.886 "state": "online", 00:27:52.886 "raid_level": "raid1", 00:27:52.886 "superblock": true, 00:27:52.886 "num_base_bdevs": 2, 00:27:52.886 "num_base_bdevs_discovered": 1, 00:27:52.886 "num_base_bdevs_operational": 1, 00:27:52.886 "base_bdevs_list": [ 00:27:52.886 { 00:27:52.886 "name": null, 00:27:52.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.886 "is_configured": false, 00:27:52.886 "data_offset": 2048, 00:27:52.886 "data_size": 63488 00:27:52.886 }, 
00:27:52.886 { 00:27:52.886 "name": "BaseBdev2", 00:27:52.886 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:52.886 "is_configured": true, 00:27:52.886 "data_offset": 2048, 00:27:52.886 "data_size": 63488 00:27:52.886 } 00:27:52.886 ] 00:27:52.886 }' 00:27:52.886 16:44:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:52.886 16:44:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:53.449 16:44:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:53.449 16:44:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:53.449 16:44:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:53.449 16:44:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:53.449 16:44:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:53.449 16:44:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.449 16:44:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.705 16:44:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:53.705 "name": "raid_bdev1", 00:27:53.705 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:53.705 "strip_size_kb": 0, 00:27:53.705 "state": "online", 00:27:53.705 "raid_level": "raid1", 00:27:53.705 "superblock": true, 00:27:53.705 "num_base_bdevs": 2, 00:27:53.705 "num_base_bdevs_discovered": 1, 00:27:53.705 "num_base_bdevs_operational": 1, 00:27:53.705 "base_bdevs_list": [ 00:27:53.705 { 00:27:53.705 "name": null, 00:27:53.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:53.705 "is_configured": false, 00:27:53.705 "data_offset": 2048, 
00:27:53.705 "data_size": 63488 00:27:53.705 }, 00:27:53.705 { 00:27:53.705 "name": "BaseBdev2", 00:27:53.705 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:53.705 "is_configured": true, 00:27:53.705 "data_offset": 2048, 00:27:53.705 "data_size": 63488 00:27:53.705 } 00:27:53.705 ] 00:27:53.705 }' 00:27:53.705 16:44:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:53.705 16:44:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:53.705 16:44:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:53.961 16:44:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:53.961 16:44:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:54.218 16:44:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:54.218 [2024-07-24 16:44:51.022647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:54.218 [2024-07-24 16:44:51.022713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:54.219 [2024-07-24 16:44:51.022742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044480 00:27:54.219 [2024-07-24 16:44:51.022757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:54.219 [2024-07-24 16:44:51.023350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:54.219 [2024-07-24 16:44:51.023377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:54.219 [2024-07-24 16:44:51.023474] 
bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:54.219 [2024-07-24 16:44:51.023493] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:54.219 [2024-07-24 16:44:51.023509] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:54.219 BaseBdev1 00:27:54.219 16:44:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:27:55.590 16:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:55.590 16:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:55.590 16:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:55.590 16:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:55.590 16:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:55.590 16:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:55.590 16:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:55.590 16:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:55.590 16:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:55.590 16:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:55.590 16:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.590 16:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.847 16:44:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:55.847 "name": "raid_bdev1", 00:27:55.848 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:55.848 "strip_size_kb": 0, 00:27:55.848 "state": "online", 00:27:55.848 "raid_level": "raid1", 00:27:55.848 "superblock": true, 00:27:55.848 "num_base_bdevs": 2, 00:27:55.848 "num_base_bdevs_discovered": 1, 00:27:55.848 "num_base_bdevs_operational": 1, 00:27:55.848 "base_bdevs_list": [ 00:27:55.848 { 00:27:55.848 "name": null, 00:27:55.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:55.848 "is_configured": false, 00:27:55.848 "data_offset": 2048, 00:27:55.848 "data_size": 63488 00:27:55.848 }, 00:27:55.848 { 00:27:55.848 "name": "BaseBdev2", 00:27:55.848 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:55.848 "is_configured": true, 00:27:55.848 "data_offset": 2048, 00:27:55.848 "data_size": 63488 00:27:55.848 } 00:27:55.848 ] 00:27:55.848 }' 00:27:55.848 16:44:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:55.848 16:44:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:56.414 16:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:56.414 16:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:56.414 16:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:56.414 16:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:56.414 16:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:56.414 16:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.414 16:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:56.672 "name": "raid_bdev1", 00:27:56.672 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:56.672 "strip_size_kb": 0, 00:27:56.672 "state": "online", 00:27:56.672 "raid_level": "raid1", 00:27:56.672 "superblock": true, 00:27:56.672 "num_base_bdevs": 2, 00:27:56.672 "num_base_bdevs_discovered": 1, 00:27:56.672 "num_base_bdevs_operational": 1, 00:27:56.672 "base_bdevs_list": [ 00:27:56.672 { 00:27:56.672 "name": null, 00:27:56.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.672 "is_configured": false, 00:27:56.672 "data_offset": 2048, 00:27:56.672 "data_size": 63488 00:27:56.672 }, 00:27:56.672 { 00:27:56.672 "name": "BaseBdev2", 00:27:56.672 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:56.672 "is_configured": true, 00:27:56.672 "data_offset": 2048, 00:27:56.672 "data_size": 63488 00:27:56.672 } 00:27:56.672 ] 00:27:56.672 }' 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 
BaseBdev1 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:56.672 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:56.932 [2024-07-24 16:44:53.681820] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:56.932 [2024-07-24 16:44:53.681987] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:56.932 [2024-07-24 16:44:53.682007] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:56.932 request: 00:27:56.932 { 00:27:56.932 "base_bdev": "BaseBdev1", 00:27:56.932 "raid_bdev": "raid_bdev1", 00:27:56.932 "method": 
"bdev_raid_add_base_bdev", 00:27:56.932 "req_id": 1 00:27:56.932 } 00:27:56.932 Got JSON-RPC error response 00:27:56.932 response: 00:27:56.932 { 00:27:56.932 "code": -22, 00:27:56.932 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:56.932 } 00:27:56.932 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:27:56.932 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:56.932 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:56.932 16:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:56.932 16:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:27:57.863 16:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:57.863 16:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:57.863 16:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:57.863 16:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:57.863 16:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:57.863 16:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:57.863 16:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:57.863 16:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:57.863 16:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:57.863 16:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:57.863 16:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.863 16:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.120 16:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:58.121 "name": "raid_bdev1", 00:27:58.121 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:58.121 "strip_size_kb": 0, 00:27:58.121 "state": "online", 00:27:58.121 "raid_level": "raid1", 00:27:58.121 "superblock": true, 00:27:58.121 "num_base_bdevs": 2, 00:27:58.121 "num_base_bdevs_discovered": 1, 00:27:58.121 "num_base_bdevs_operational": 1, 00:27:58.121 "base_bdevs_list": [ 00:27:58.121 { 00:27:58.121 "name": null, 00:27:58.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.121 "is_configured": false, 00:27:58.121 "data_offset": 2048, 00:27:58.121 "data_size": 63488 00:27:58.121 }, 00:27:58.121 { 00:27:58.121 "name": "BaseBdev2", 00:27:58.121 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:58.121 "is_configured": true, 00:27:58.121 "data_offset": 2048, 00:27:58.121 "data_size": 63488 00:27:58.121 } 00:27:58.121 ] 00:27:58.121 }' 00:27:58.121 16:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:58.121 16:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:58.684 16:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:58.684 16:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:58.684 16:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:58.684 16:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:58.684 16:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:58.684 16:44:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.684 16:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.940 16:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:58.940 "name": "raid_bdev1", 00:27:58.940 "uuid": "ecd4417f-a402-4376-82bd-0a7905152d92", 00:27:58.940 "strip_size_kb": 0, 00:27:58.940 "state": "online", 00:27:58.940 "raid_level": "raid1", 00:27:58.940 "superblock": true, 00:27:58.940 "num_base_bdevs": 2, 00:27:58.940 "num_base_bdevs_discovered": 1, 00:27:58.940 "num_base_bdevs_operational": 1, 00:27:58.940 "base_bdevs_list": [ 00:27:58.940 { 00:27:58.940 "name": null, 00:27:58.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:58.940 "is_configured": false, 00:27:58.940 "data_offset": 2048, 00:27:58.940 "data_size": 63488 00:27:58.940 }, 00:27:58.940 { 00:27:58.940 "name": "BaseBdev2", 00:27:58.940 "uuid": "9eda86d7-f54a-5924-9dbe-a833cae221ca", 00:27:58.940 "is_configured": true, 00:27:58.940 "data_offset": 2048, 00:27:58.940 "data_size": 63488 00:27:58.940 } 00:27:58.940 ] 00:27:58.940 }' 00:27:58.940 16:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:58.940 16:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:58.940 16:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:59.195 16:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:59.195 16:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 1744111 00:27:59.195 16:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 1744111 ']' 00:27:59.195 16:44:55 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 1744111 00:27:59.195 16:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:27:59.195 16:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:59.195 16:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1744111 00:27:59.195 16:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:59.195 16:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:59.195 16:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1744111' 00:27:59.195 killing process with pid 1744111 00:27:59.195 16:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 1744111 00:27:59.195 Received shutdown signal, test time was about 60.000000 seconds 00:27:59.195 00:27:59.195 Latency(us) 00:27:59.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.195 =================================================================================================================== 00:27:59.195 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:59.195 [2024-07-24 16:44:55.884339] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:59.195 [2024-07-24 16:44:55.884469] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:59.195 16:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 1744111 00:27:59.195 [2024-07-24 16:44:55.884534] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:59.195 [2024-07-24 16:44:55.884550] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043880 name raid_bdev1, state offline 00:27:59.451 [2024-07-24 16:44:56.201627] 
bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:28:01.377 00:28:01.377 real 0m37.936s 00:28:01.377 user 0m53.050s 00:28:01.377 sys 0m6.865s 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:01.377 ************************************ 00:28:01.377 END TEST raid_rebuild_test_sb 00:28:01.377 ************************************ 00:28:01.377 16:44:57 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:28:01.377 16:44:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:28:01.377 16:44:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:01.377 16:44:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:01.377 ************************************ 00:28:01.377 START TEST raid_rebuild_test_io 00:28:01.377 ************************************ 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:01.377 
16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=1750843 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 1750843 /var/tmp/spdk-raid.sock 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 1750843 ']' 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:01.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:01.377 16:44:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:01.377 [2024-07-24 16:44:58.074954] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:28:01.377 [2024-07-24 16:44:58.075082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1750843 ] 00:28:01.377 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:01.377 Zero copy mechanism will not be used. 
00:28:01.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.377 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:01.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.377 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:01.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.377 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:01.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.377 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:01.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.377 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:01.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.377 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:01.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.377 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:01.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.377 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:01.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.377 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:01.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.377 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:01.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.377 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:01.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.377 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:01.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.377 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:01.378 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:01.378 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:01.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.378 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:01.636 [2024-07-24 16:44:58.299952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.894 [2024-07-24 16:44:58.563900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.152 [2024-07-24 16:44:58.896367] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:02.152 [2024-07-24 16:44:58.896406] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:02.409 16:44:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:02.409 16:44:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:28:02.409 16:44:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:02.409 16:44:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:02.666 BaseBdev1_malloc 00:28:02.666 16:44:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:02.923 [2024-07-24 16:44:59.576023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:02.923 [2024-07-24 16:44:59.576092] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:28:02.923 [2024-07-24 16:44:59.576123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:28:02.923 [2024-07-24 16:44:59.576151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:02.923 [2024-07-24 16:44:59.578909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:02.923 [2024-07-24 16:44:59.578953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:02.923 BaseBdev1 00:28:02.923 16:44:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:02.923 16:44:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:03.180 BaseBdev2_malloc 00:28:03.180 16:44:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:03.437 [2024-07-24 16:45:00.068205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:03.437 [2024-07-24 16:45:00.068272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:03.437 [2024-07-24 16:45:00.068301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:28:03.437 [2024-07-24 16:45:00.068322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:03.437 [2024-07-24 16:45:00.071100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:03.437 [2024-07-24 16:45:00.071149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:03.437 BaseBdev2 00:28:03.437 16:45:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:03.695 spare_malloc 00:28:03.695 16:45:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:03.952 spare_delay 00:28:03.952 16:45:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:03.952 [2024-07-24 16:45:00.799363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:03.952 [2024-07-24 16:45:00.799426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:03.952 [2024-07-24 16:45:00.799455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:28:03.952 [2024-07-24 16:45:00.799473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:03.952 [2024-07-24 16:45:00.802286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:03.952 [2024-07-24 16:45:00.802324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:03.952 spare 00:28:04.210 16:45:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:04.210 [2024-07-24 16:45:01.011946] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:04.210 [2024-07-24 16:45:01.014253] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:04.210 [2024-07-24 16:45:01.014359] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x616000041a80 00:28:04.210 [2024-07-24 16:45:01.014378] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:04.210 [2024-07-24 16:45:01.014743] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:28:04.210 [2024-07-24 16:45:01.014983] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:28:04.210 [2024-07-24 16:45:01.015003] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:28:04.210 [2024-07-24 16:45:01.015253] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:04.210 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:04.210 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:04.210 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:04.210 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:04.210 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:04.210 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:04.210 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:04.210 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:04.210 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:04.210 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:04.210 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.210 16:45:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.467 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:04.467 "name": "raid_bdev1", 00:28:04.467 "uuid": "c5ddde84-cc32-487f-8a59-cd2db4cb8fa6", 00:28:04.467 "strip_size_kb": 0, 00:28:04.467 "state": "online", 00:28:04.467 "raid_level": "raid1", 00:28:04.467 "superblock": false, 00:28:04.467 "num_base_bdevs": 2, 00:28:04.467 "num_base_bdevs_discovered": 2, 00:28:04.467 "num_base_bdevs_operational": 2, 00:28:04.467 "base_bdevs_list": [ 00:28:04.467 { 00:28:04.467 "name": "BaseBdev1", 00:28:04.467 "uuid": "0afea097-85ae-58b4-925d-5cc3f8a8f1cf", 00:28:04.467 "is_configured": true, 00:28:04.467 "data_offset": 0, 00:28:04.467 "data_size": 65536 00:28:04.467 }, 00:28:04.467 { 00:28:04.467 "name": "BaseBdev2", 00:28:04.467 "uuid": "8d8d3804-6717-5221-b403-cd7eb168f423", 00:28:04.467 "is_configured": true, 00:28:04.467 "data_offset": 0, 00:28:04.467 "data_size": 65536 00:28:04.467 } 00:28:04.467 ] 00:28:04.467 }' 00:28:04.467 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:04.467 16:45:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:05.032 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:28:05.033 16:45:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:05.290 [2024-07-24 16:45:02.051081] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:05.290 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:28:05.290 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:28:05.290 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:05.546 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:28:05.546 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:28:05.546 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:05.546 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:28:05.546 [2024-07-24 16:45:02.403890] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:28:05.546 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:05.547 Zero copy mechanism will not be used. 00:28:05.547 Running I/O for 60 seconds... 
00:28:05.804 [2024-07-24 16:45:02.516675] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:05.804 [2024-07-24 16:45:02.524558] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:28:05.804 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:05.804 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:05.804 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:05.804 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:05.804 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:05.804 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:05.804 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:05.804 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:05.804 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:05.804 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:05.804 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.804 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.062 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:06.062 "name": "raid_bdev1", 00:28:06.062 "uuid": "c5ddde84-cc32-487f-8a59-cd2db4cb8fa6", 00:28:06.062 "strip_size_kb": 0, 00:28:06.062 "state": "online", 00:28:06.062 "raid_level": "raid1", 00:28:06.062 "superblock": 
false, 00:28:06.062 "num_base_bdevs": 2, 00:28:06.062 "num_base_bdevs_discovered": 1, 00:28:06.062 "num_base_bdevs_operational": 1, 00:28:06.062 "base_bdevs_list": [ 00:28:06.062 { 00:28:06.062 "name": null, 00:28:06.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.062 "is_configured": false, 00:28:06.062 "data_offset": 0, 00:28:06.062 "data_size": 65536 00:28:06.062 }, 00:28:06.062 { 00:28:06.062 "name": "BaseBdev2", 00:28:06.062 "uuid": "8d8d3804-6717-5221-b403-cd7eb168f423", 00:28:06.062 "is_configured": true, 00:28:06.062 "data_offset": 0, 00:28:06.062 "data_size": 65536 00:28:06.062 } 00:28:06.062 ] 00:28:06.062 }' 00:28:06.062 16:45:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:06.062 16:45:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:06.626 16:45:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:06.883 [2024-07-24 16:45:03.575599] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:06.883 16:45:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:06.883 [2024-07-24 16:45:03.654220] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:28:06.883 [2024-07-24 16:45:03.656580] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:07.140 [2024-07-24 16:45:03.774461] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:07.140 [2024-07-24 16:45:03.774790] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:07.140 [2024-07-24 16:45:03.986119] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:28:07.140 [2024-07-24 16:45:03.986368] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:07.704 [2024-07-24 16:45:04.310085] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:07.704 [2024-07-24 16:45:04.413120] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:07.704 [2024-07-24 16:45:04.413357] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:07.962 16:45:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:07.962 16:45:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:07.962 16:45:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:07.962 16:45:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:07.962 16:45:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:07.962 16:45:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.962 16:45:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.962 [2024-07-24 16:45:04.751924] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:07.962 [2024-07-24 16:45:04.752373] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:08.220 16:45:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:08.220 "name": "raid_bdev1", 
00:28:08.220 "uuid": "c5ddde84-cc32-487f-8a59-cd2db4cb8fa6", 00:28:08.220 "strip_size_kb": 0, 00:28:08.220 "state": "online", 00:28:08.220 "raid_level": "raid1", 00:28:08.220 "superblock": false, 00:28:08.220 "num_base_bdevs": 2, 00:28:08.220 "num_base_bdevs_discovered": 2, 00:28:08.220 "num_base_bdevs_operational": 2, 00:28:08.220 "process": { 00:28:08.220 "type": "rebuild", 00:28:08.220 "target": "spare", 00:28:08.220 "progress": { 00:28:08.220 "blocks": 14336, 00:28:08.220 "percent": 21 00:28:08.220 } 00:28:08.220 }, 00:28:08.220 "base_bdevs_list": [ 00:28:08.220 { 00:28:08.220 "name": "spare", 00:28:08.220 "uuid": "503347bf-54bb-5bea-a9a3-51e2137412a5", 00:28:08.220 "is_configured": true, 00:28:08.220 "data_offset": 0, 00:28:08.220 "data_size": 65536 00:28:08.220 }, 00:28:08.220 { 00:28:08.220 "name": "BaseBdev2", 00:28:08.220 "uuid": "8d8d3804-6717-5221-b403-cd7eb168f423", 00:28:08.220 "is_configured": true, 00:28:08.220 "data_offset": 0, 00:28:08.220 "data_size": 65536 00:28:08.220 } 00:28:08.220 ] 00:28:08.220 }' 00:28:08.220 16:45:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:08.220 16:45:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:08.220 16:45:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:08.220 [2024-07-24 16:45:04.971620] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:08.220 16:45:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:08.220 16:45:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:08.478 [2024-07-24 16:45:05.186266] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:08.478 [2024-07-24 
16:45:05.307778] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:08.478 [2024-07-24 16:45:05.317405] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:08.478 [2024-07-24 16:45:05.317444] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:08.478 [2024-07-24 16:45:05.317462] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:08.736 [2024-07-24 16:45:05.363817] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:28:08.736 16:45:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:08.736 16:45:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:08.736 16:45:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:08.736 16:45:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:08.736 16:45:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:08.736 16:45:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:08.736 16:45:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:08.736 16:45:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:08.736 16:45:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:08.736 16:45:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:08.736 16:45:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.736 16:45:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.994 16:45:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:08.994 "name": "raid_bdev1", 00:28:08.994 "uuid": "c5ddde84-cc32-487f-8a59-cd2db4cb8fa6", 00:28:08.994 "strip_size_kb": 0, 00:28:08.994 "state": "online", 00:28:08.994 "raid_level": "raid1", 00:28:08.994 "superblock": false, 00:28:08.994 "num_base_bdevs": 2, 00:28:08.994 "num_base_bdevs_discovered": 1, 00:28:08.994 "num_base_bdevs_operational": 1, 00:28:08.994 "base_bdevs_list": [ 00:28:08.994 { 00:28:08.994 "name": null, 00:28:08.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.994 "is_configured": false, 00:28:08.994 "data_offset": 0, 00:28:08.994 "data_size": 65536 00:28:08.994 }, 00:28:08.994 { 00:28:08.994 "name": "BaseBdev2", 00:28:08.994 "uuid": "8d8d3804-6717-5221-b403-cd7eb168f423", 00:28:08.994 "is_configured": true, 00:28:08.994 "data_offset": 0, 00:28:08.994 "data_size": 65536 00:28:08.994 } 00:28:08.994 ] 00:28:08.994 }' 00:28:08.994 16:45:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:08.994 16:45:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:09.560 16:45:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:09.560 16:45:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:09.560 16:45:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:09.560 16:45:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:09.560 16:45:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:09.560 16:45:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.560 
16:45:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.560 16:45:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:09.560 "name": "raid_bdev1", 00:28:09.560 "uuid": "c5ddde84-cc32-487f-8a59-cd2db4cb8fa6", 00:28:09.560 "strip_size_kb": 0, 00:28:09.560 "state": "online", 00:28:09.560 "raid_level": "raid1", 00:28:09.560 "superblock": false, 00:28:09.560 "num_base_bdevs": 2, 00:28:09.560 "num_base_bdevs_discovered": 1, 00:28:09.560 "num_base_bdevs_operational": 1, 00:28:09.560 "base_bdevs_list": [ 00:28:09.560 { 00:28:09.560 "name": null, 00:28:09.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.560 "is_configured": false, 00:28:09.560 "data_offset": 0, 00:28:09.560 "data_size": 65536 00:28:09.560 }, 00:28:09.560 { 00:28:09.560 "name": "BaseBdev2", 00:28:09.560 "uuid": "8d8d3804-6717-5221-b403-cd7eb168f423", 00:28:09.560 "is_configured": true, 00:28:09.560 "data_offset": 0, 00:28:09.560 "data_size": 65536 00:28:09.560 } 00:28:09.560 ] 00:28:09.560 }' 00:28:09.560 16:45:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:09.818 16:45:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:09.818 16:45:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:09.818 16:45:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:09.818 16:45:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:10.076 [2024-07-24 16:45:06.713323] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:10.076 16:45:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:28:10.076 [2024-07-24 16:45:06.814642] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:28:10.076 [2024-07-24 16:45:06.816968] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:10.334 [2024-07-24 16:45:07.082078] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:10.334 [2024-07-24 16:45:07.082341] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:10.592 [2024-07-24 16:45:07.428530] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:10.592 [2024-07-24 16:45:07.428971] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:10.850 [2024-07-24 16:45:07.673351] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:11.108 16:45:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:11.108 16:45:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:11.108 16:45:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:11.108 16:45:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:11.108 16:45:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:11.108 16:45:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.108 16:45:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:11.108 [2024-07-24 16:45:07.935772] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:11.366 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:11.366 "name": "raid_bdev1", 00:28:11.366 "uuid": "c5ddde84-cc32-487f-8a59-cd2db4cb8fa6", 00:28:11.366 "strip_size_kb": 0, 00:28:11.366 "state": "online", 00:28:11.366 "raid_level": "raid1", 00:28:11.366 "superblock": false, 00:28:11.366 "num_base_bdevs": 2, 00:28:11.366 "num_base_bdevs_discovered": 2, 00:28:11.366 "num_base_bdevs_operational": 2, 00:28:11.366 "process": { 00:28:11.366 "type": "rebuild", 00:28:11.366 "target": "spare", 00:28:11.366 "progress": { 00:28:11.366 "blocks": 14336, 00:28:11.366 "percent": 21 00:28:11.366 } 00:28:11.366 }, 00:28:11.366 "base_bdevs_list": [ 00:28:11.366 { 00:28:11.366 "name": "spare", 00:28:11.366 "uuid": "503347bf-54bb-5bea-a9a3-51e2137412a5", 00:28:11.366 "is_configured": true, 00:28:11.366 "data_offset": 0, 00:28:11.366 "data_size": 65536 00:28:11.366 }, 00:28:11.366 { 00:28:11.366 "name": "BaseBdev2", 00:28:11.366 "uuid": "8d8d3804-6717-5221-b403-cd7eb168f423", 00:28:11.366 "is_configured": true, 00:28:11.366 "data_offset": 0, 00:28:11.367 "data_size": 65536 00:28:11.367 } 00:28:11.367 ] 00:28:11.367 }' 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:11.367 [2024-07-24 16:45:08.079905] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = 
true ']' 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=912 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.367 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:11.625 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:11.625 "name": "raid_bdev1", 00:28:11.625 "uuid": "c5ddde84-cc32-487f-8a59-cd2db4cb8fa6", 00:28:11.625 "strip_size_kb": 0, 00:28:11.625 "state": "online", 00:28:11.625 "raid_level": "raid1", 00:28:11.625 "superblock": false, 00:28:11.625 "num_base_bdevs": 2, 00:28:11.625 "num_base_bdevs_discovered": 2, 00:28:11.625 "num_base_bdevs_operational": 2, 00:28:11.625 "process": { 00:28:11.625 "type": "rebuild", 00:28:11.625 "target": "spare", 00:28:11.625 "progress": 
{ 00:28:11.625 "blocks": 18432, 00:28:11.625 "percent": 28 00:28:11.625 } 00:28:11.625 }, 00:28:11.625 "base_bdevs_list": [ 00:28:11.625 { 00:28:11.625 "name": "spare", 00:28:11.625 "uuid": "503347bf-54bb-5bea-a9a3-51e2137412a5", 00:28:11.625 "is_configured": true, 00:28:11.625 "data_offset": 0, 00:28:11.625 "data_size": 65536 00:28:11.625 }, 00:28:11.625 { 00:28:11.625 "name": "BaseBdev2", 00:28:11.625 "uuid": "8d8d3804-6717-5221-b403-cd7eb168f423", 00:28:11.625 "is_configured": true, 00:28:11.625 "data_offset": 0, 00:28:11.625 "data_size": 65536 00:28:11.625 } 00:28:11.625 ] 00:28:11.625 }' 00:28:11.625 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:11.625 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:11.625 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:11.625 [2024-07-24 16:45:08.444526] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:11.625 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:11.625 16:45:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:11.883 [2024-07-24 16:45:08.547271] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:12.141 [2024-07-24 16:45:08.768309] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:28:12.141 [2024-07-24 16:45:08.910991] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:28:12.706 16:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:12.706 16:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:12.706 16:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:12.706 16:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:12.706 16:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:12.706 16:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:12.706 16:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.706 16:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.964 16:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:12.964 "name": "raid_bdev1", 00:28:12.964 "uuid": "c5ddde84-cc32-487f-8a59-cd2db4cb8fa6", 00:28:12.964 "strip_size_kb": 0, 00:28:12.964 "state": "online", 00:28:12.964 "raid_level": "raid1", 00:28:12.964 "superblock": false, 00:28:12.964 "num_base_bdevs": 2, 00:28:12.964 "num_base_bdevs_discovered": 2, 00:28:12.964 "num_base_bdevs_operational": 2, 00:28:12.964 "process": { 00:28:12.964 "type": "rebuild", 00:28:12.964 "target": "spare", 00:28:12.964 "progress": { 00:28:12.964 "blocks": 38912, 00:28:12.964 "percent": 59 00:28:12.964 } 00:28:12.964 }, 00:28:12.964 "base_bdevs_list": [ 00:28:12.964 { 00:28:12.964 "name": "spare", 00:28:12.964 "uuid": "503347bf-54bb-5bea-a9a3-51e2137412a5", 00:28:12.964 "is_configured": true, 00:28:12.964 "data_offset": 0, 00:28:12.964 "data_size": 65536 00:28:12.964 }, 00:28:12.964 { 00:28:12.964 "name": "BaseBdev2", 00:28:12.964 "uuid": "8d8d3804-6717-5221-b403-cd7eb168f423", 00:28:12.964 "is_configured": true, 00:28:12.964 "data_offset": 0, 00:28:12.964 "data_size": 65536 00:28:12.964 } 00:28:12.964 ] 00:28:12.964 }' 00:28:12.964 16:45:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:12.964 16:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:12.964 16:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:12.964 16:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:12.964 16:45:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:13.222 [2024-07-24 16:45:10.071029] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:28:13.820 [2024-07-24 16:45:10.411875] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:28:14.078 16:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:14.078 16:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:14.078 16:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:14.078 16:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:14.078 16:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:14.078 16:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:14.078 16:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.078 16:45:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.336 16:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:14.336 "name": "raid_bdev1", 00:28:14.336 
"uuid": "c5ddde84-cc32-487f-8a59-cd2db4cb8fa6", 00:28:14.336 "strip_size_kb": 0, 00:28:14.336 "state": "online", 00:28:14.336 "raid_level": "raid1", 00:28:14.336 "superblock": false, 00:28:14.336 "num_base_bdevs": 2, 00:28:14.336 "num_base_bdevs_discovered": 2, 00:28:14.336 "num_base_bdevs_operational": 2, 00:28:14.336 "process": { 00:28:14.336 "type": "rebuild", 00:28:14.336 "target": "spare", 00:28:14.336 "progress": { 00:28:14.336 "blocks": 61440, 00:28:14.336 "percent": 93 00:28:14.336 } 00:28:14.336 }, 00:28:14.336 "base_bdevs_list": [ 00:28:14.336 { 00:28:14.336 "name": "spare", 00:28:14.337 "uuid": "503347bf-54bb-5bea-a9a3-51e2137412a5", 00:28:14.337 "is_configured": true, 00:28:14.337 "data_offset": 0, 00:28:14.337 "data_size": 65536 00:28:14.337 }, 00:28:14.337 { 00:28:14.337 "name": "BaseBdev2", 00:28:14.337 "uuid": "8d8d3804-6717-5221-b403-cd7eb168f423", 00:28:14.337 "is_configured": true, 00:28:14.337 "data_offset": 0, 00:28:14.337 "data_size": 65536 00:28:14.337 } 00:28:14.337 ] 00:28:14.337 }' 00:28:14.337 16:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:14.337 16:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:14.337 16:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:14.337 16:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:14.337 16:45:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:14.337 [2024-07-24 16:45:11.189288] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:14.595 [2024-07-24 16:45:11.289552] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:14.595 [2024-07-24 16:45:11.291287] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:15.529 16:45:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:15.529 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:15.529 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:15.529 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:15.529 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:15.529 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:15.529 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.529 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.529 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:15.529 "name": "raid_bdev1", 00:28:15.529 "uuid": "c5ddde84-cc32-487f-8a59-cd2db4cb8fa6", 00:28:15.529 "strip_size_kb": 0, 00:28:15.529 "state": "online", 00:28:15.529 "raid_level": "raid1", 00:28:15.529 "superblock": false, 00:28:15.529 "num_base_bdevs": 2, 00:28:15.529 "num_base_bdevs_discovered": 2, 00:28:15.529 "num_base_bdevs_operational": 2, 00:28:15.529 "base_bdevs_list": [ 00:28:15.529 { 00:28:15.529 "name": "spare", 00:28:15.529 "uuid": "503347bf-54bb-5bea-a9a3-51e2137412a5", 00:28:15.529 "is_configured": true, 00:28:15.529 "data_offset": 0, 00:28:15.529 "data_size": 65536 00:28:15.529 }, 00:28:15.529 { 00:28:15.529 "name": "BaseBdev2", 00:28:15.529 "uuid": "8d8d3804-6717-5221-b403-cd7eb168f423", 00:28:15.529 "is_configured": true, 00:28:15.529 "data_offset": 0, 00:28:15.529 "data_size": 65536 00:28:15.529 } 00:28:15.529 ] 00:28:15.529 }' 00:28:15.529 16:45:12 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:15.529 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:15.529 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:15.787 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:15.787 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:28:15.787 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:15.787 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:15.787 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:15.787 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:15.787 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:15.787 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.787 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.045 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:16.045 "name": "raid_bdev1", 00:28:16.045 "uuid": "c5ddde84-cc32-487f-8a59-cd2db4cb8fa6", 00:28:16.045 "strip_size_kb": 0, 00:28:16.045 "state": "online", 00:28:16.045 "raid_level": "raid1", 00:28:16.045 "superblock": false, 00:28:16.045 "num_base_bdevs": 2, 00:28:16.045 "num_base_bdevs_discovered": 2, 00:28:16.045 "num_base_bdevs_operational": 2, 00:28:16.045 "base_bdevs_list": [ 00:28:16.045 { 00:28:16.045 "name": "spare", 00:28:16.045 "uuid": "503347bf-54bb-5bea-a9a3-51e2137412a5", 00:28:16.045 "is_configured": true, 
00:28:16.045 "data_offset": 0, 00:28:16.045 "data_size": 65536 00:28:16.045 }, 00:28:16.045 { 00:28:16.045 "name": "BaseBdev2", 00:28:16.045 "uuid": "8d8d3804-6717-5221-b403-cd7eb168f423", 00:28:16.045 "is_configured": true, 00:28:16.045 "data_offset": 0, 00:28:16.045 "data_size": 65536 00:28:16.045 } 00:28:16.045 ] 00:28:16.045 }' 00:28:16.045 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:16.045 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:16.045 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:16.045 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:16.045 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:16.045 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:16.045 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:16.046 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:16.046 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:16.046 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:16.046 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:16.046 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:16.046 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:16.046 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:16.046 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.046 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.304 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:16.304 "name": "raid_bdev1", 00:28:16.304 "uuid": "c5ddde84-cc32-487f-8a59-cd2db4cb8fa6", 00:28:16.304 "strip_size_kb": 0, 00:28:16.304 "state": "online", 00:28:16.304 "raid_level": "raid1", 00:28:16.304 "superblock": false, 00:28:16.304 "num_base_bdevs": 2, 00:28:16.304 "num_base_bdevs_discovered": 2, 00:28:16.304 "num_base_bdevs_operational": 2, 00:28:16.304 "base_bdevs_list": [ 00:28:16.304 { 00:28:16.304 "name": "spare", 00:28:16.304 "uuid": "503347bf-54bb-5bea-a9a3-51e2137412a5", 00:28:16.304 "is_configured": true, 00:28:16.304 "data_offset": 0, 00:28:16.304 "data_size": 65536 00:28:16.304 }, 00:28:16.304 { 00:28:16.304 "name": "BaseBdev2", 00:28:16.304 "uuid": "8d8d3804-6717-5221-b403-cd7eb168f423", 00:28:16.304 "is_configured": true, 00:28:16.304 "data_offset": 0, 00:28:16.304 "data_size": 65536 00:28:16.304 } 00:28:16.304 ] 00:28:16.304 }' 00:28:16.304 16:45:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:16.304 16:45:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:16.868 16:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:16.868 [2024-07-24 16:45:13.722398] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:16.868 [2024-07-24 16:45:13.722436] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:17.126 00:28:17.126 Latency(us) 00:28:17.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.126 Job: 
raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:17.126 raid_bdev1 : 11.37 101.42 304.26 0.00 0.00 13545.43 329.32 118279.37 00:28:17.126 =================================================================================================================== 00:28:17.126 Total : 101.42 304.26 0.00 0.00 13545.43 329.32 118279.37 00:28:17.126 [2024-07-24 16:45:13.831944] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:17.126 [2024-07-24 16:45:13.831987] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:17.126 [2024-07-24 16:45:13.832116] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:17.126 [2024-07-24 16:45:13.832133] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:28:17.126 0 00:28:17.126 16:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.126 16:45:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:28:17.384 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:28:17.384 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:28:17.384 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:28:17.384 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:28:17.384 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:17.384 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:28:17.384 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:17.384 16:45:14 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:17.384 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:17.384 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:17.384 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:17.384 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:17.384 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:28:17.642 /dev/nbd0 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:17.642 1+0 records in 00:28:17.642 1+0 records out 
00:28:17.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184369 s, 22.2 MB/s 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:28:17.642 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:17.643 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:28:17.900 /dev/nbd1 00:28:17.900 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:17.900 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:17.900 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:17.900 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:28:17.900 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:17.900 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:17.900 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:17.900 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:28:17.900 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:17.900 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:17.900 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:17.900 1+0 records in 00:28:17.900 1+0 records out 00:28:17.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000152322 s, 26.9 MB/s 00:28:17.900 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:17.900 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:28:17.900 
16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:17.901 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:17.901 16:45:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:28:17.901 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:17.901 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:17.901 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:18.158 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:28:18.158 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:18.158 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:18.158 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:18.158 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:18.158 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:18.158 16:45:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:18.416 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:18.674 16:45:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 1750843 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 1750843 ']' 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 1750843 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1750843 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1750843' 00:28:18.674 killing process with pid 1750843 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 1750843 00:28:18.674 Received shutdown signal, test time was about 12.941834 seconds 00:28:18.674 00:28:18.674 Latency(us) 00:28:18.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.674 =================================================================================================================== 00:28:18.674 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:18.674 [2024-07-24 16:45:15.379748] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:18.674 16:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 1750843 00:28:18.932 [2024-07-24 16:45:15.607886] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@800 -- # return 0 00:28:20.831 00:28:20.831 real 0m19.439s 00:28:20.831 user 0m27.962s 00:28:20.831 sys 0m2.841s 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:20.831 ************************************ 00:28:20.831 END TEST raid_rebuild_test_io 00:28:20.831 ************************************ 00:28:20.831 16:45:17 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:28:20.831 16:45:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:28:20.831 16:45:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:20.831 16:45:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:20.831 ************************************ 00:28:20.831 START TEST raid_rebuild_test_sb_io 00:28:20.831 ************************************ 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo 
BaseBdev1 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:28:20.831 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:28:20.832 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=1754844 00:28:20.832 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 1754844 /var/tmp/spdk-raid.sock 00:28:20.832 16:45:17 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:20.832 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 1754844 ']' 00:28:20.832 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:20.832 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:20.832 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:20.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:20.832 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:20.832 16:45:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:20.832 [2024-07-24 16:45:17.600570] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:28:20.832 [2024-07-24 16:45:17.600692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1754844 ] 00:28:20.832 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:20.832 Zero copy mechanism will not be used. 
00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:21.090 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:21.090 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:21.090 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:21.090 EAL: Requested device 0000:3f:02.4 cannot be used
00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:21.090 EAL: Requested device 0000:3f:02.5 cannot be used
00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:21.090 EAL: Requested device 0000:3f:02.6 cannot be used
00:28:21.090 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:21.090 EAL: Requested device 0000:3f:02.7 cannot be used
00:28:21.090 [2024-07-24 16:45:17.827439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:21.348 [2024-07-24 16:45:18.110852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:21.606 [2024-07-24 16:45:18.457192] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:28:21.606 [2024-07-24 16:45:18.457228] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:28:21.864 16:45:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:21.864 16:45:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0
00:28:21.864 16:45:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:28:21.864 16:45:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:28:22.121 BaseBdev1_malloc
00:28:22.121 16:45:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:28:22.377 [2024-07-24 16:45:19.135832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:28:22.377 [2024-07-24 16:45:19.135897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:28:22.377 [2024-07-24 16:45:19.135926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680
00:28:22.377 [2024-07-24 16:45:19.135946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:28:22.377 [2024-07-24 16:45:19.138725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:28:22.377 [2024-07-24 16:45:19.138766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:28:22.377 BaseBdev1
00:28:22.377 16:45:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:28:22.377 16:45:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:28:22.635 BaseBdev2_malloc
00:28:22.635 16:45:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:28:22.893 [2024-07-24 16:45:19.648408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:28:22.893 [2024-07-24 16:45:19.648470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:28:22.893 [2024-07-24 16:45:19.648497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280
00:28:22.893 [2024-07-24 16:45:19.648518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:28:22.893 [2024-07-24 16:45:19.651270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:28:22.893 [2024-07-24 16:45:19.651309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:28:22.893 BaseBdev2
00:28:22.893 16:45:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:28:23.151 spare_malloc
00:28:23.151 16:45:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:28:23.409 spare_delay
00:28:23.409 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:28:23.667 [2024-07-24 16:45:20.399490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:28:23.667 [2024-07-24 16:45:20.399555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:28:23.667 [2024-07-24 16:45:20.399583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480
00:28:23.667 [2024-07-24 16:45:20.399601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:28:23.667 [2024-07-24 16:45:20.402397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:28:23.667 [2024-07-24 16:45:20.402439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:28:23.667 spare
00:28:23.667 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
00:28:23.926 [2024-07-24 16:45:20.624121] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:28:23.926 [2024-07-24 16:45:20.626463] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:28:23.926 [2024-07-24 16:45:20.626696] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80
00:28:23.926 [2024-07-24 16:45:20.626722] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:28:23.926 [2024-07-24 16:45:20.627109] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640
00:28:23.926 [2024-07-24 16:45:20.627369] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80
00:28:23.926 [2024-07-24 16:45:20.627385] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80
00:28:23.926 [2024-07-24 16:45:20.627598] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:28:23.926 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:28:23.926 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:28:23.926 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:28:23.926 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:28:23.926 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:28:23.926 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:28:23.926 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:28:23.926 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:28:23.926 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:28:23.926 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp
00:28:23.926 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:23.926 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:24.184 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:28:24.184 "name": "raid_bdev1",
00:28:24.184 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544",
00:28:24.184 "strip_size_kb": 0,
00:28:24.184 "state": "online",
00:28:24.184 "raid_level": "raid1",
00:28:24.184 "superblock": true,
00:28:24.184 "num_base_bdevs": 2,
00:28:24.184 "num_base_bdevs_discovered": 2,
00:28:24.184 "num_base_bdevs_operational": 2,
00:28:24.184 "base_bdevs_list": [
00:28:24.184 {
00:28:24.184 "name": "BaseBdev1",
00:28:24.184 "uuid": "caa54edf-ffac-5bfc-aaf0-605ee1d17096",
00:28:24.184 "is_configured": true,
00:28:24.184 "data_offset": 2048,
00:28:24.184 "data_size": 63488
00:28:24.184 },
00:28:24.184 {
00:28:24.184 "name": "BaseBdev2",
00:28:24.184 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f",
00:28:24.184 "is_configured": true,
00:28:24.184 "data_offset": 2048,
00:28:24.184 "data_size": 63488
00:28:24.184 }
00:28:24.184 ]
00:28:24.184 }'
00:28:24.184 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:28:24.184 16:45:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:28:24.748 16:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:28:24.748 16:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks'
00:28:25.007 [2024-07-24 16:45:21.635152] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:28:25.007 16:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488
00:28:25.007 16:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:25.007 16:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:28:25.265 16:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048
00:28:25.265 16:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']'
00:28:25.265 16:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:28:25.265 16:45:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:28:25.265 [2024-07-24 16:45:22.002362] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0
00:28:25.265 I/O size of 3145728 is greater than zero copy threshold (65536).
00:28:25.265 Zero copy mechanism will not be used.
00:28:25.265 Running I/O for 60 seconds...
00:28:25.265 [2024-07-24 16:45:22.098777] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:28:25.265 [2024-07-24 16:45:22.114379] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0
00:28:25.523 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:28:25.523 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:28:25.523 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:28:25.523 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:28:25.523 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:28:25.523 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:28:25.523 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:28:25.523 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:28:25.523 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:28:25.523 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp
00:28:25.523 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:25.523 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:25.782 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:28:25.782 "name": "raid_bdev1",
00:28:25.782 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544",
00:28:25.782 "strip_size_kb": 0,
00:28:25.782 "state": "online",
00:28:25.782 "raid_level": "raid1",
00:28:25.782 "superblock": true,
00:28:25.782 "num_base_bdevs": 2,
00:28:25.782 "num_base_bdevs_discovered": 1,
00:28:25.782 "num_base_bdevs_operational": 1,
00:28:25.782 "base_bdevs_list": [
00:28:25.782 {
00:28:25.782 "name": null,
00:28:25.782 "uuid": "00000000-0000-0000-0000-000000000000",
00:28:25.782 "is_configured": false,
00:28:25.782 "data_offset": 2048,
00:28:25.782 "data_size": 63488
00:28:25.782 },
00:28:25.782 {
00:28:25.782 "name": "BaseBdev2",
00:28:25.782 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f",
00:28:25.782 "is_configured": true,
00:28:25.782 "data_offset": 2048,
00:28:25.782 "data_size": 63488
00:28:25.782 }
00:28:25.782 ]
00:28:25.782 }'
00:28:25.782 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:28:25.782 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:28:26.349 16:45:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:28:26.349 [2024-07-24 16:45:23.166076] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:28:26.607 16:45:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1
00:28:26.607 [2024-07-24 16:45:23.238226] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980
00:28:26.607 [2024-07-24 16:45:23.240607] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:28:26.607 [2024-07-24 16:45:23.342994] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:28:26.607 [2024-07-24 16:45:23.343387] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:28:26.873 [2024-07-24 16:45:23.545866] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:28:26.873 [2024-07-24 16:45:23.546049] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:28:27.167 [2024-07-24 16:45:23.791670] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:28:27.167 [2024-07-24 16:45:23.910361] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:28:27.425 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:28:27.426 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:28:27.426 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:28:27.426 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare
00:28:27.426 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:28:27.426 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:27.426 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:27.426 [2024-07-24 16:45:24.267252] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:28:27.683 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:28:27.683 "name": "raid_bdev1",
00:28:27.683 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544",
00:28:27.683 "strip_size_kb": 0,
00:28:27.683 "state": "online",
00:28:27.683 "raid_level": "raid1",
00:28:27.683 "superblock": true,
00:28:27.683 "num_base_bdevs": 2,
00:28:27.683 "num_base_bdevs_discovered": 2,
00:28:27.683 "num_base_bdevs_operational": 2,
00:28:27.683 "process": {
00:28:27.683 "type": "rebuild",
00:28:27.683 "target": "spare",
00:28:27.683 "progress": {
00:28:27.683 "blocks": 16384,
00:28:27.683 "percent": 25
00:28:27.683 }
00:28:27.683 },
00:28:27.683 "base_bdevs_list": [
00:28:27.683 {
00:28:27.683 "name": "spare",
00:28:27.683 "uuid": "2cf0a810-7383-5763-922e-5a449531428b",
00:28:27.683 "is_configured": true,
00:28:27.683 "data_offset": 2048,
00:28:27.683 "data_size": 63488
00:28:27.683 },
00:28:27.683 {
00:28:27.683 "name": "BaseBdev2",
00:28:27.683 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f",
00:28:27.683 "is_configured": true,
00:28:27.683 "data_offset": 2048,
00:28:27.683 "data_size": 63488
00:28:27.683 }
00:28:27.683 ]
00:28:27.683 }'
00:28:27.683 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:28:27.683 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:28:27.683 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:28:27.941 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:28:27.942 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:28:27.942 [2024-07-24 16:45:24.596352] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:28:27.942 [2024-07-24 16:45:24.596705] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:28:27.942 [2024-07-24 16:45:24.750580] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:28:27.942 [2024-07-24 16:45:24.798901] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:28:28.200 [2024-07-24 16:45:24.858688] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:28:28.200 [2024-07-24 16:45:24.868515] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:28:28.200 [2024-07-24 16:45:24.868553] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:28:28.200 [2024-07-24 16:45:24.868567] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:28:28.200 [2024-07-24 16:45:24.918491] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0
00:28:28.200 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:28:28.200 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:28:28.200 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:28:28.200 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:28:28.200 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:28:28.200 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:28:28.200 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:28:28.200 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:28:28.200 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:28:28.200 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp
00:28:28.200 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:28.200 16:45:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:28.458 16:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:28:28.458 "name": "raid_bdev1",
00:28:28.458 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544",
00:28:28.458 "strip_size_kb": 0,
00:28:28.458 "state": "online",
00:28:28.458 "raid_level": "raid1",
00:28:28.458 "superblock": true,
00:28:28.458 "num_base_bdevs": 2,
00:28:28.458 "num_base_bdevs_discovered": 1,
00:28:28.458 "num_base_bdevs_operational": 1,
00:28:28.458 "base_bdevs_list": [
00:28:28.458 {
00:28:28.458 "name": null,
00:28:28.458 "uuid": "00000000-0000-0000-0000-000000000000",
00:28:28.458 "is_configured": false,
00:28:28.458 "data_offset": 2048,
00:28:28.458 "data_size": 63488
00:28:28.458 },
00:28:28.458 {
00:28:28.458 "name": "BaseBdev2",
00:28:28.458 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f",
00:28:28.458 "is_configured": true,
00:28:28.458 "data_offset": 2048,
00:28:28.458 "data_size": 63488
00:28:28.458 }
00:28:28.458 ]
00:28:28.458 }'
00:28:28.458 16:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:28:28.458 16:45:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:28:29.024 16:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none
00:28:29.024 16:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:28:29.024 16:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:28:29.024 16:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none
00:28:29.024 16:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:28:29.024 16:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:29.024 16:45:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:29.281 16:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:28:29.281 "name": "raid_bdev1",
00:28:29.281 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544",
00:28:29.281 "strip_size_kb": 0,
00:28:29.281 "state": "online",
00:28:29.281 "raid_level": "raid1",
00:28:29.281 "superblock": true,
00:28:29.281 "num_base_bdevs": 2,
00:28:29.281 "num_base_bdevs_discovered": 1,
00:28:29.281 "num_base_bdevs_operational": 1,
00:28:29.281 "base_bdevs_list": [
00:28:29.281 {
00:28:29.281 "name": null,
00:28:29.281 "uuid": "00000000-0000-0000-0000-000000000000",
00:28:29.281 "is_configured": false,
00:28:29.281 "data_offset": 2048,
00:28:29.281 "data_size": 63488
00:28:29.281 },
00:28:29.281 {
00:28:29.281 "name": "BaseBdev2",
00:28:29.281 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f",
00:28:29.281 "is_configured": true,
00:28:29.281 "data_offset": 2048,
00:28:29.281 "data_size": 63488
00:28:29.281 }
00:28:29.281 ]
00:28:29.281 }'
00:28:29.281 16:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:28:29.281 16:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:28:29.281 16:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:28:29.281 16:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:28:29.281 16:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:28:29.539 [2024-07-24 16:45:26.332771] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:28:29.539 16:45:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1
00:28:29.797 [2024-07-24 16:45:26.410528] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50
00:28:29.797 [2024-07-24 16:45:26.412874] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:28:29.797 [2024-07-24 16:45:26.550469] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:28:30.055 [2024-07-24 16:45:26.669517] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:28:30.055 [2024-07-24 16:45:26.669768] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:28:30.312 [2024-07-24 16:45:27.159184] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:28:30.570 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:28:30.570 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:28:30.570 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:28:30.570 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare
00:28:30.570 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:28:30.570 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:30.570 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:30.828 [2024-07-24 16:45:27.514887] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:28:30.828 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:28:30.828 "name": "raid_bdev1",
00:28:30.828 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544",
00:28:30.828 "strip_size_kb": 0,
00:28:30.828 "state": "online",
00:28:30.828 "raid_level": "raid1",
00:28:30.828 "superblock": true,
00:28:30.828 "num_base_bdevs": 2,
00:28:30.828 "num_base_bdevs_discovered": 2,
00:28:30.828 "num_base_bdevs_operational": 2,
00:28:30.828 "process": {
00:28:30.828 "type": "rebuild",
00:28:30.828 "target": "spare",
00:28:30.828 "progress": {
00:28:30.828 "blocks": 14336,
00:28:30.828 "percent": 22
00:28:30.828 }
00:28:30.828 },
00:28:30.828 "base_bdevs_list": [
00:28:30.828 {
00:28:30.828 "name": "spare",
00:28:30.828 "uuid": "2cf0a810-7383-5763-922e-5a449531428b",
00:28:30.828 "is_configured": true,
00:28:30.828 "data_offset": 2048,
00:28:30.828 "data_size": 63488
00:28:30.828 },
00:28:30.828 {
00:28:30.828 "name": "BaseBdev2",
00:28:30.828 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f",
00:28:30.828 "is_configured": true,
00:28:30.828 "data_offset": 2048,
00:28:30.828 "data_size": 63488
00:28:30.828 }
00:28:30.828 ]
00:28:30.828 }'
00:28:30.828 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:28:30.828 [2024-07-24 16:45:27.650431] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:28:30.828 [2024-07-24 16:45:27.650683] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:28:30.828 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:28:30.828 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:28:31.086 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:28:31.086 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']'
00:28:31.086 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']'
00:28:31.086 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected
00:28:31.086 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2
00:28:31.086 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']'
00:28:31.086 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']'
00:28:31.086 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=931
00:28:31.087 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:28:31.087 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:28:31.087 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:28:31.087 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:28:31.087 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare
00:28:31.087 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:28:31.087 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:31.087 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:31.346 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:28:31.346 "name": "raid_bdev1",
00:28:31.346 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544",
00:28:31.346 "strip_size_kb": 0,
00:28:31.346 "state": "online",
00:28:31.346 "raid_level": "raid1",
00:28:31.346 "superblock": true,
00:28:31.346 "num_base_bdevs": 2,
00:28:31.346 "num_base_bdevs_discovered": 2,
00:28:31.346 "num_base_bdevs_operational": 2,
00:28:31.346 "process": {
00:28:31.346 "type": "rebuild",
00:28:31.346 "target": "spare",
00:28:31.346 "progress": {
00:28:31.346 "blocks": 18432,
00:28:31.346 "percent": 29
00:28:31.346 }
00:28:31.346 },
00:28:31.346 "base_bdevs_list": [
00:28:31.346 {
00:28:31.346 "name": "spare",
00:28:31.346 "uuid": "2cf0a810-7383-5763-922e-5a449531428b",
00:28:31.346 "is_configured": true,
00:28:31.346 "data_offset": 2048,
00:28:31.346 "data_size": 63488
00:28:31.346 },
00:28:31.346 {
00:28:31.346 "name": "BaseBdev2",
00:28:31.346 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f",
00:28:31.346 "is_configured": true,
00:28:31.346 "data_offset": 2048,
00:28:31.346 "data_size": 63488
00:28:31.346 }
00:28:31.346 ]
00:28:31.346 }'
00:28:31.346 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:28:31.346 [2024-07-24 16:45:27.966003] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:28:31.346 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:28:31.346 16:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:28:31.346 16:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:28:31.346 16:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1
00:28:31.346 [2024-07-24 16:45:28.085480] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:28:31.911 [2024-07-24 16:45:28.538255] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:28:31.911 [2024-07-24 16:45:28.546162] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:28:32.169 [2024-07-24 16:45:28.883676] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:28:32.427 16:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:28:32.427 16:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:28:32.427 16:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:28:32.427 16:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:28:32.427 16:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare
00:28:32.427 16:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:28:32.427 16:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:32.427 16:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:32.427 [2024-07-24 16:45:29.104189] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:28:32.686 16:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:28:32.686 "name": "raid_bdev1",
00:28:32.686 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544",
00:28:32.686 "strip_size_kb": 0,
00:28:32.686 "state": "online",
00:28:32.686 "raid_level": "raid1",
00:28:32.686 "superblock": true,
00:28:32.686 "num_base_bdevs": 2,
00:28:32.686 "num_base_bdevs_discovered": 2,
00:28:32.686 "num_base_bdevs_operational": 2,
00:28:32.686 "process": {
00:28:32.686 "type": "rebuild",
00:28:32.686 "target": "spare",
00:28:32.686 "progress": {
00:28:32.686 "blocks": 36864,
00:28:32.686 "percent": 58
00:28:32.686 }
00:28:32.686 },
00:28:32.686 "base_bdevs_list": [
00:28:32.686 {
00:28:32.686 "name": "spare",
00:28:32.686 "uuid": "2cf0a810-7383-5763-922e-5a449531428b",
00:28:32.686 "is_configured": true,
00:28:32.686 "data_offset": 2048,
00:28:32.686 "data_size": 63488
00:28:32.686 },
00:28:32.686 {
00:28:32.686 "name": "BaseBdev2",
00:28:32.686 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f",
00:28:32.686 "is_configured": true,
00:28:32.686 "data_offset": 2048,
00:28:32.686 "data_size": 63488
00:28:32.686 }
00:28:32.686 ]
00:28:32.686 }'
00:28:32.686 16:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:28:32.686 16:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:28:32.686 16:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:28:32.686 16:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:28:32.686 16:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1
00:28:32.944 [2024-07-24 16:45:29.714444] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152
00:28:33.201 [2024-07-24 16:45:29.942280] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:28:33.459 [2024-07-24 16:45:30.161197] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296
00:28:33.717 16:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:28:33.717 16:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:28:33.717 16:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:28:33.717 16:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:28:33.717 16:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare
00:28:33.717 16:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:28:33.717 16:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:28:33.717 16:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:28:33.717 [2024-07-24 16:45:30.395908] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296
00:28:33.975 16:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:28:33.975 "name": "raid_bdev1",
00:28:33.975 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544",
00:28:33.975 "strip_size_kb": 0,
00:28:33.975 "state": "online",
00:28:33.975 "raid_level": "raid1",
00:28:33.975 "superblock": true,
00:28:33.975 "num_base_bdevs": 2,
00:28:33.975 "num_base_bdevs_discovered": 2,
00:28:33.975 "num_base_bdevs_operational": 2,
00:28:33.975 "process": {
00:28:33.975 "type": "rebuild",
00:28:33.975 "target": "spare",
00:28:33.975 "progress": {
00:28:33.975 "blocks": 53248,
00:28:33.975 "percent": 83
00:28:33.975 }
00:28:33.975 },
00:28:33.975 "base_bdevs_list": [
00:28:33.975 {
00:28:33.975 "name": "spare",
00:28:33.975 "uuid": "2cf0a810-7383-5763-922e-5a449531428b",
00:28:33.975 "is_configured": true,
00:28:33.975 "data_offset": 2048,
00:28:33.975 "data_size": 63488
00:28:33.975 },
00:28:33.975 {
00:28:33.975 "name": "BaseBdev2",
00:28:33.975 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f",
00:28:33.975 "is_configured": true,
00:28:33.975 "data_offset": 2048,
00:28:33.976 "data_size": 63488
00:28:33.976 }
00:28:33.976 ]
00:28:33.976 }'
00:28:33.976 16:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:28:33.976 16:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:28:33.976 16:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:28:33.976 16:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:28:33.976 16:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1
00:28:34.233 [2024-07-24 16:45:31.071256] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:28:34.491 [2024-07-24 16:45:31.179484] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:28:34.491 [2024-07-24 16:45:31.181972] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:28:35.057 16:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:28:35.057 16:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:28:35.057 16:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:28:35.057 16:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:28:35.057 16:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare
00:28:35.057 16:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:28:35.057
16:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.057 16:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.315 16:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:35.315 "name": "raid_bdev1", 00:28:35.315 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:35.315 "strip_size_kb": 0, 00:28:35.315 "state": "online", 00:28:35.315 "raid_level": "raid1", 00:28:35.315 "superblock": true, 00:28:35.315 "num_base_bdevs": 2, 00:28:35.315 "num_base_bdevs_discovered": 2, 00:28:35.315 "num_base_bdevs_operational": 2, 00:28:35.315 "base_bdevs_list": [ 00:28:35.315 { 00:28:35.315 "name": "spare", 00:28:35.315 "uuid": "2cf0a810-7383-5763-922e-5a449531428b", 00:28:35.315 "is_configured": true, 00:28:35.315 "data_offset": 2048, 00:28:35.315 "data_size": 63488 00:28:35.315 }, 00:28:35.315 { 00:28:35.315 "name": "BaseBdev2", 00:28:35.315 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:35.316 "is_configured": true, 00:28:35.316 "data_offset": 2048, 00:28:35.316 "data_size": 63488 00:28:35.316 } 00:28:35.316 ] 00:28:35.316 }' 00:28:35.316 16:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:35.316 16:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:35.316 16:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:35.316 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:35.316 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:28:35.316 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:35.316 
16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:35.316 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:35.316 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:35.316 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:35.316 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.316 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:35.574 "name": "raid_bdev1", 00:28:35.574 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:35.574 "strip_size_kb": 0, 00:28:35.574 "state": "online", 00:28:35.574 "raid_level": "raid1", 00:28:35.574 "superblock": true, 00:28:35.574 "num_base_bdevs": 2, 00:28:35.574 "num_base_bdevs_discovered": 2, 00:28:35.574 "num_base_bdevs_operational": 2, 00:28:35.574 "base_bdevs_list": [ 00:28:35.574 { 00:28:35.574 "name": "spare", 00:28:35.574 "uuid": "2cf0a810-7383-5763-922e-5a449531428b", 00:28:35.574 "is_configured": true, 00:28:35.574 "data_offset": 2048, 00:28:35.574 "data_size": 63488 00:28:35.574 }, 00:28:35.574 { 00:28:35.574 "name": "BaseBdev2", 00:28:35.574 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:35.574 "is_configured": true, 00:28:35.574 "data_offset": 2048, 00:28:35.574 "data_size": 63488 00:28:35.574 } 00:28:35.574 ] 00:28:35.574 }' 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:35.574 16:45:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.574 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.832 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:35.832 "name": "raid_bdev1", 00:28:35.832 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:35.832 "strip_size_kb": 0, 00:28:35.832 "state": "online", 00:28:35.832 "raid_level": "raid1", 00:28:35.832 "superblock": 
true, 00:28:35.832 "num_base_bdevs": 2, 00:28:35.832 "num_base_bdevs_discovered": 2, 00:28:35.832 "num_base_bdevs_operational": 2, 00:28:35.832 "base_bdevs_list": [ 00:28:35.832 { 00:28:35.832 "name": "spare", 00:28:35.832 "uuid": "2cf0a810-7383-5763-922e-5a449531428b", 00:28:35.832 "is_configured": true, 00:28:35.832 "data_offset": 2048, 00:28:35.832 "data_size": 63488 00:28:35.832 }, 00:28:35.832 { 00:28:35.832 "name": "BaseBdev2", 00:28:35.832 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:35.832 "is_configured": true, 00:28:35.832 "data_offset": 2048, 00:28:35.832 "data_size": 63488 00:28:35.832 } 00:28:35.832 ] 00:28:35.832 }' 00:28:35.832 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:35.832 16:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:36.397 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:36.662 [2024-07-24 16:45:33.371862] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:36.662 [2024-07-24 16:45:33.371898] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:36.662 00:28:36.662 Latency(us) 00:28:36.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.662 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:36.662 raid_bdev1 : 11.44 100.36 301.07 0.00 0.00 13494.01 324.40 118279.37 00:28:36.662 =================================================================================================================== 00:28:36.662 Total : 100.36 301.07 0.00 0.00 13494.01 324.40 118279.37 00:28:36.662 [2024-07-24 16:45:33.502468] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:36.662 [2024-07-24 16:45:33.502511] bdev_raid.c: 
486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:36.662 [2024-07-24 16:45:33.502606] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:36.662 [2024-07-24 16:45:33.502622] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:28:36.662 0 00:28:36.918 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.918 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:28:36.918 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:28:36.918 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:28:36.918 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:28:36.918 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:28:36.918 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:36.918 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:28:36.918 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:36.918 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:36.918 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:36.918 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:36.918 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:36.918 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:36.918 16:45:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:28:37.175 /dev/nbd0 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:37.175 1+0 records in 00:28:37.175 1+0 records out 00:28:37.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266259 s, 15.4 MB/s 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm 
-f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:37.175 16:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:28:37.432 /dev/nbd1 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:37.432 1+0 records in 00:28:37.432 1+0 records out 00:28:37.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285582 s, 14.3 MB/s 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # return 0 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:37.432 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:37.689 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:28:37.689 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:37.689 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:37.689 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:37.689 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:37.689 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:37.689 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:37.948 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:37.948 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:37.948 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:37.948 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:37.948 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:37.948 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:37.948 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:37.948 16:45:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:37.948 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:37.948 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:37.948 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:37.948 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:37.948 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:37.948 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:37.948 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:38.206 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:38.206 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:38.206 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:38.206 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:38.206 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:38.206 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:38.206 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:38.206 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:38.206 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:28:38.206 16:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:38.464 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:38.722 [2024-07-24 16:45:35.421042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:38.722 [2024-07-24 16:45:35.421100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:38.722 [2024-07-24 16:45:35.421129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 00:28:38.722 [2024-07-24 16:45:35.421153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:38.722 [2024-07-24 16:45:35.423920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:38.722 [2024-07-24 16:45:35.423955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:38.722 [2024-07-24 16:45:35.424057] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:38.722 [2024-07-24 16:45:35.424123] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:38.722 [2024-07-24 16:45:35.424331] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:38.722 spare 00:28:38.722 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:38.722 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:38.722 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:38.722 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:38.722 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:38.722 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:38.722 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:38.722 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:38.722 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:38.722 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:38.722 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.722 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.722 [2024-07-24 16:45:35.524672] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043b80 00:28:38.722 [2024-07-24 16:45:35.524709] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:38.722 [2024-07-24 16:45:35.525041] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00001f930 00:28:38.722 [2024-07-24 16:45:35.525337] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043b80 00:28:38.722 [2024-07-24 16:45:35.525354] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043b80 00:28:38.722 [2024-07-24 16:45:35.525562] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:38.979 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:38.979 "name": "raid_bdev1", 00:28:38.979 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:38.979 "strip_size_kb": 0, 00:28:38.979 "state": "online", 00:28:38.979 "raid_level": 
"raid1", 00:28:38.979 "superblock": true, 00:28:38.979 "num_base_bdevs": 2, 00:28:38.979 "num_base_bdevs_discovered": 2, 00:28:38.979 "num_base_bdevs_operational": 2, 00:28:38.979 "base_bdevs_list": [ 00:28:38.979 { 00:28:38.979 "name": "spare", 00:28:38.979 "uuid": "2cf0a810-7383-5763-922e-5a449531428b", 00:28:38.979 "is_configured": true, 00:28:38.979 "data_offset": 2048, 00:28:38.979 "data_size": 63488 00:28:38.979 }, 00:28:38.979 { 00:28:38.979 "name": "BaseBdev2", 00:28:38.979 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:38.979 "is_configured": true, 00:28:38.979 "data_offset": 2048, 00:28:38.979 "data_size": 63488 00:28:38.979 } 00:28:38.979 ] 00:28:38.979 }' 00:28:38.979 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:38.979 16:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:39.548 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:39.548 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:39.548 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:39.548 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:39.548 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:39.548 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.548 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.827 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:39.827 "name": "raid_bdev1", 00:28:39.827 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:39.827 
"strip_size_kb": 0, 00:28:39.827 "state": "online", 00:28:39.827 "raid_level": "raid1", 00:28:39.827 "superblock": true, 00:28:39.827 "num_base_bdevs": 2, 00:28:39.827 "num_base_bdevs_discovered": 2, 00:28:39.827 "num_base_bdevs_operational": 2, 00:28:39.827 "base_bdevs_list": [ 00:28:39.827 { 00:28:39.827 "name": "spare", 00:28:39.827 "uuid": "2cf0a810-7383-5763-922e-5a449531428b", 00:28:39.827 "is_configured": true, 00:28:39.827 "data_offset": 2048, 00:28:39.827 "data_size": 63488 00:28:39.827 }, 00:28:39.827 { 00:28:39.827 "name": "BaseBdev2", 00:28:39.827 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:39.827 "is_configured": true, 00:28:39.827 "data_offset": 2048, 00:28:39.827 "data_size": 63488 00:28:39.827 } 00:28:39.827 ] 00:28:39.827 }' 00:28:39.827 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:39.827 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:39.827 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:39.827 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:39.827 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.827 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:40.100 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:28:40.100 16:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:40.358 [2024-07-24 16:45:37.013849] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:40.358 16:45:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:40.358 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:40.358 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:40.358 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:40.358 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:40.358 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:40.358 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:40.358 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:40.358 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:40.358 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:40.358 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.358 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.616 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:40.616 "name": "raid_bdev1", 00:28:40.616 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:40.616 "strip_size_kb": 0, 00:28:40.616 "state": "online", 00:28:40.616 "raid_level": "raid1", 00:28:40.616 "superblock": true, 00:28:40.616 "num_base_bdevs": 2, 00:28:40.616 "num_base_bdevs_discovered": 1, 00:28:40.616 "num_base_bdevs_operational": 1, 00:28:40.616 "base_bdevs_list": [ 00:28:40.616 { 00:28:40.616 "name": null, 00:28:40.616 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:40.616 "is_configured": false, 00:28:40.616 "data_offset": 2048, 00:28:40.616 "data_size": 63488 00:28:40.616 }, 00:28:40.616 { 00:28:40.616 "name": "BaseBdev2", 00:28:40.616 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:40.616 "is_configured": true, 00:28:40.616 "data_offset": 2048, 00:28:40.616 "data_size": 63488 00:28:40.616 } 00:28:40.616 ] 00:28:40.616 }' 00:28:40.616 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:40.616 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:41.181 16:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:41.181 [2024-07-24 16:45:38.020740] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:41.181 [2024-07-24 16:45:38.020934] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:41.181 [2024-07-24 16:45:38.020961] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:41.181 [2024-07-24 16:45:38.021002] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:41.438 [2024-07-24 16:45:38.047079] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00001fa00 00:28:41.438 [2024-07-24 16:45:38.049414] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:41.438 16:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:28:42.371 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:42.371 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:42.371 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:42.371 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:42.371 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:42.371 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.371 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.629 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:42.629 "name": "raid_bdev1", 00:28:42.629 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:42.629 "strip_size_kb": 0, 00:28:42.629 "state": "online", 00:28:42.629 "raid_level": "raid1", 00:28:42.629 "superblock": true, 00:28:42.629 "num_base_bdevs": 2, 00:28:42.629 "num_base_bdevs_discovered": 2, 00:28:42.629 "num_base_bdevs_operational": 2, 00:28:42.629 "process": { 00:28:42.629 "type": "rebuild", 00:28:42.629 "target": "spare", 00:28:42.629 "progress": { 00:28:42.629 "blocks": 24576, 
00:28:42.629 "percent": 38 00:28:42.629 } 00:28:42.629 }, 00:28:42.629 "base_bdevs_list": [ 00:28:42.629 { 00:28:42.629 "name": "spare", 00:28:42.629 "uuid": "2cf0a810-7383-5763-922e-5a449531428b", 00:28:42.629 "is_configured": true, 00:28:42.629 "data_offset": 2048, 00:28:42.629 "data_size": 63488 00:28:42.629 }, 00:28:42.629 { 00:28:42.629 "name": "BaseBdev2", 00:28:42.629 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:42.629 "is_configured": true, 00:28:42.629 "data_offset": 2048, 00:28:42.629 "data_size": 63488 00:28:42.629 } 00:28:42.629 ] 00:28:42.629 }' 00:28:42.629 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:42.629 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:42.629 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:42.629 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:42.629 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:42.887 [2024-07-24 16:45:39.603007] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:42.887 [2024-07-24 16:45:39.662549] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:42.887 [2024-07-24 16:45:39.662619] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:42.887 [2024-07-24 16:45:39.662640] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:42.887 [2024-07-24 16:45:39.662659] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:42.887 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:28:42.887 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:42.887 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:42.887 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:42.887 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:42.887 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:42.887 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:42.887 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:42.887 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:42.887 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:42.887 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.887 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:43.145 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:43.145 "name": "raid_bdev1", 00:28:43.145 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:43.145 "strip_size_kb": 0, 00:28:43.145 "state": "online", 00:28:43.145 "raid_level": "raid1", 00:28:43.145 "superblock": true, 00:28:43.145 "num_base_bdevs": 2, 00:28:43.145 "num_base_bdevs_discovered": 1, 00:28:43.145 "num_base_bdevs_operational": 1, 00:28:43.145 "base_bdevs_list": [ 00:28:43.145 { 00:28:43.145 "name": null, 00:28:43.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:43.145 "is_configured": false, 00:28:43.145 
"data_offset": 2048, 00:28:43.145 "data_size": 63488 00:28:43.145 }, 00:28:43.145 { 00:28:43.145 "name": "BaseBdev2", 00:28:43.145 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:43.145 "is_configured": true, 00:28:43.145 "data_offset": 2048, 00:28:43.145 "data_size": 63488 00:28:43.145 } 00:28:43.145 ] 00:28:43.145 }' 00:28:43.145 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:43.145 16:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:43.709 16:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:43.967 [2024-07-24 16:45:40.764066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:43.967 [2024-07-24 16:45:40.764158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:43.967 [2024-07-24 16:45:40.764187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044180 00:28:43.967 [2024-07-24 16:45:40.764206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:43.967 [2024-07-24 16:45:40.764805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:43.967 [2024-07-24 16:45:40.764837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:43.967 [2024-07-24 16:45:40.764946] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:43.967 [2024-07-24 16:45:40.764967] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:43.967 [2024-07-24 16:45:40.764983] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:43.967 [2024-07-24 16:45:40.765012] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:43.967 [2024-07-24 16:45:40.790670] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00001fad0 00:28:43.967 spare 00:28:43.967 [2024-07-24 16:45:40.792934] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:43.967 16:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:28:45.340 16:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:45.340 16:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:45.340 16:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:45.340 16:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:45.340 16:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:45.340 16:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.340 16:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.340 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:45.340 "name": "raid_bdev1", 00:28:45.340 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:45.340 "strip_size_kb": 0, 00:28:45.340 "state": "online", 00:28:45.340 "raid_level": "raid1", 00:28:45.340 "superblock": true, 00:28:45.340 "num_base_bdevs": 2, 00:28:45.340 "num_base_bdevs_discovered": 2, 00:28:45.340 "num_base_bdevs_operational": 2, 00:28:45.340 "process": { 00:28:45.340 "type": "rebuild", 00:28:45.340 "target": "spare", 00:28:45.340 "progress": { 00:28:45.340 
"blocks": 24576, 00:28:45.340 "percent": 38 00:28:45.340 } 00:28:45.340 }, 00:28:45.340 "base_bdevs_list": [ 00:28:45.340 { 00:28:45.340 "name": "spare", 00:28:45.340 "uuid": "2cf0a810-7383-5763-922e-5a449531428b", 00:28:45.340 "is_configured": true, 00:28:45.340 "data_offset": 2048, 00:28:45.340 "data_size": 63488 00:28:45.340 }, 00:28:45.340 { 00:28:45.340 "name": "BaseBdev2", 00:28:45.340 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:45.340 "is_configured": true, 00:28:45.340 "data_offset": 2048, 00:28:45.340 "data_size": 63488 00:28:45.340 } 00:28:45.340 ] 00:28:45.340 }' 00:28:45.340 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:45.340 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:45.340 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:45.340 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:45.340 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:45.597 [2024-07-24 16:45:42.342579] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:45.597 [2024-07-24 16:45:42.406101] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:45.597 [2024-07-24 16:45:42.406170] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:45.597 [2024-07-24 16:45:42.406198] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:45.597 [2024-07-24 16:45:42.406211] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:45.855 "name": "raid_bdev1", 00:28:45.855 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:45.855 "strip_size_kb": 0, 00:28:45.855 "state": "online", 00:28:45.855 "raid_level": "raid1", 00:28:45.855 "superblock": true, 00:28:45.855 "num_base_bdevs": 2, 00:28:45.855 "num_base_bdevs_discovered": 1, 00:28:45.855 "num_base_bdevs_operational": 1, 00:28:45.855 "base_bdevs_list": [ 00:28:45.855 { 00:28:45.855 "name": null, 00:28:45.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:45.855 "is_configured": false, 00:28:45.855 
"data_offset": 2048, 00:28:45.855 "data_size": 63488 00:28:45.855 }, 00:28:45.855 { 00:28:45.855 "name": "BaseBdev2", 00:28:45.855 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:45.855 "is_configured": true, 00:28:45.855 "data_offset": 2048, 00:28:45.855 "data_size": 63488 00:28:45.855 } 00:28:45.855 ] 00:28:45.855 }' 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:45.855 16:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:46.421 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:46.421 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:46.421 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:46.421 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:46.421 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:46.421 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.421 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:46.677 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:46.677 "name": "raid_bdev1", 00:28:46.677 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:46.677 "strip_size_kb": 0, 00:28:46.677 "state": "online", 00:28:46.677 "raid_level": "raid1", 00:28:46.677 "superblock": true, 00:28:46.677 "num_base_bdevs": 2, 00:28:46.677 "num_base_bdevs_discovered": 1, 00:28:46.677 "num_base_bdevs_operational": 1, 00:28:46.677 "base_bdevs_list": [ 00:28:46.677 { 00:28:46.677 "name": null, 00:28:46.677 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:46.677 "is_configured": false, 00:28:46.677 "data_offset": 2048, 00:28:46.677 "data_size": 63488 00:28:46.677 }, 00:28:46.677 { 00:28:46.677 "name": "BaseBdev2", 00:28:46.677 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:46.677 "is_configured": true, 00:28:46.677 "data_offset": 2048, 00:28:46.677 "data_size": 63488 00:28:46.677 } 00:28:46.677 ] 00:28:46.677 }' 00:28:46.677 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:46.677 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:46.677 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:46.935 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:46.935 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:46.935 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:47.192 [2024-07-24 16:45:43.944381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:47.192 [2024-07-24 16:45:43.944440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:47.192 [2024-07-24 16:45:43.944469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044780 00:28:47.193 [2024-07-24 16:45:43.944484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:47.193 [2024-07-24 16:45:43.945058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:47.193 [2024-07-24 16:45:43.945084] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:47.193 [2024-07-24 16:45:43.945186] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:47.193 [2024-07-24 16:45:43.945206] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:47.193 [2024-07-24 16:45:43.945222] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:47.193 BaseBdev1 00:28:47.193 16:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:28:48.126 16:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:48.126 16:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:48.126 16:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:48.126 16:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:48.126 16:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:48.126 16:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:48.126 16:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:48.126 16:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:48.126 16:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:48.126 16:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:48.126 16:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.126 16:45:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.383 16:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:48.384 "name": "raid_bdev1", 00:28:48.384 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:48.384 "strip_size_kb": 0, 00:28:48.384 "state": "online", 00:28:48.384 "raid_level": "raid1", 00:28:48.384 "superblock": true, 00:28:48.384 "num_base_bdevs": 2, 00:28:48.384 "num_base_bdevs_discovered": 1, 00:28:48.384 "num_base_bdevs_operational": 1, 00:28:48.384 "base_bdevs_list": [ 00:28:48.384 { 00:28:48.384 "name": null, 00:28:48.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.384 "is_configured": false, 00:28:48.384 "data_offset": 2048, 00:28:48.384 "data_size": 63488 00:28:48.384 }, 00:28:48.384 { 00:28:48.384 "name": "BaseBdev2", 00:28:48.384 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:48.384 "is_configured": true, 00:28:48.384 "data_offset": 2048, 00:28:48.384 "data_size": 63488 00:28:48.384 } 00:28:48.384 ] 00:28:48.384 }' 00:28:48.384 16:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:48.384 16:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:48.949 16:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:48.949 16:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:48.949 16:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:48.949 16:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:48.949 16:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:48.949 16:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.949 
16:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.207 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:49.207 "name": "raid_bdev1", 00:28:49.207 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:49.207 "strip_size_kb": 0, 00:28:49.207 "state": "online", 00:28:49.207 "raid_level": "raid1", 00:28:49.207 "superblock": true, 00:28:49.207 "num_base_bdevs": 2, 00:28:49.207 "num_base_bdevs_discovered": 1, 00:28:49.207 "num_base_bdevs_operational": 1, 00:28:49.207 "base_bdevs_list": [ 00:28:49.207 { 00:28:49.207 "name": null, 00:28:49.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:49.207 "is_configured": false, 00:28:49.207 "data_offset": 2048, 00:28:49.207 "data_size": 63488 00:28:49.207 }, 00:28:49.207 { 00:28:49.207 "name": "BaseBdev2", 00:28:49.207 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:49.207 "is_configured": true, 00:28:49.207 "data_offset": 2048, 00:28:49.207 "data_size": 63488 00:28:49.207 } 00:28:49.207 ] 00:28:49.207 }' 00:28:49.207 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:49.207 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:49.207 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:49.465 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:49.465 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:49.465 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:28:49.465 16:45:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:49.465 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:49.465 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:49.465 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:49.465 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:49.465 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:49.465 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:49.465 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:49.465 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:49.465 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:49.465 [2024-07-24 16:45:46.315191] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:49.465 [2024-07-24 16:45:46.315355] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:49.465 [2024-07-24 16:45:46.315375] 
bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:49.465 request: 00:28:49.465 { 00:28:49.465 "base_bdev": "BaseBdev1", 00:28:49.465 "raid_bdev": "raid_bdev1", 00:28:49.465 "method": "bdev_raid_add_base_bdev", 00:28:49.465 "req_id": 1 00:28:49.465 } 00:28:49.465 Got JSON-RPC error response 00:28:49.465 response: 00:28:49.465 { 00:28:49.465 "code": -22, 00:28:49.465 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:49.465 } 00:28:49.723 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:28:49.723 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:49.723 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:49.723 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:49.723 16:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:28:50.655 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:50.655 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:50.655 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:50.655 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:50.655 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:50.655 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:50.655 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:50.655 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:50.655 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:50.655 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:50.655 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.655 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.913 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:50.913 "name": "raid_bdev1", 00:28:50.913 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:50.913 "strip_size_kb": 0, 00:28:50.913 "state": "online", 00:28:50.913 "raid_level": "raid1", 00:28:50.913 "superblock": true, 00:28:50.913 "num_base_bdevs": 2, 00:28:50.913 "num_base_bdevs_discovered": 1, 00:28:50.913 "num_base_bdevs_operational": 1, 00:28:50.913 "base_bdevs_list": [ 00:28:50.913 { 00:28:50.913 "name": null, 00:28:50.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.913 "is_configured": false, 00:28:50.913 "data_offset": 2048, 00:28:50.913 "data_size": 63488 00:28:50.913 }, 00:28:50.913 { 00:28:50.913 "name": "BaseBdev2", 00:28:50.913 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:50.913 "is_configured": true, 00:28:50.913 "data_offset": 2048, 00:28:50.913 "data_size": 63488 00:28:50.913 } 00:28:50.913 ] 00:28:50.913 }' 00:28:50.913 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:50.913 16:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:51.479 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:51.479 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:51.479 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # 
local process_type=none 00:28:51.479 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:51.479 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:51.479 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.479 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:51.737 "name": "raid_bdev1", 00:28:51.737 "uuid": "5d0729c3-9e0a-4c00-b204-e00a518be544", 00:28:51.737 "strip_size_kb": 0, 00:28:51.737 "state": "online", 00:28:51.737 "raid_level": "raid1", 00:28:51.737 "superblock": true, 00:28:51.737 "num_base_bdevs": 2, 00:28:51.737 "num_base_bdevs_discovered": 1, 00:28:51.737 "num_base_bdevs_operational": 1, 00:28:51.737 "base_bdevs_list": [ 00:28:51.737 { 00:28:51.737 "name": null, 00:28:51.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:51.737 "is_configured": false, 00:28:51.737 "data_offset": 2048, 00:28:51.737 "data_size": 63488 00:28:51.737 }, 00:28:51.737 { 00:28:51.737 "name": "BaseBdev2", 00:28:51.737 "uuid": "eae2bdd0-1b76-5b92-8181-566b893ebb4f", 00:28:51.737 "is_configured": true, 00:28:51.737 "data_offset": 2048, 00:28:51.737 "data_size": 63488 00:28:51.737 } 00:28:51.737 ] 00:28:51.737 }' 00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 
00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 1754844 00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 1754844 ']' 00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 1754844 00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1754844 00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1754844' 00:28:51.737 killing process with pid 1754844 00:28:51.737 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 1754844 00:28:51.737 Received shutdown signal, test time was about 26.454421 seconds 00:28:51.737 00:28:51.737 Latency(us) 00:28:51.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.737 =================================================================================================================== 00:28:51.737 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:51.738 [2024-07-24 16:45:48.523772] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:51.738 [2024-07-24 16:45:48.523908] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:51.738 16:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 1754844 00:28:51.738 [2024-07-24 16:45:48.523976] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:51.738 [2024-07-24 16:45:48.523991] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043b80 name raid_bdev1, state offline 00:28:52.004 [2024-07-24 16:45:48.747310] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:28:53.905 00:28:53.905 real 0m32.982s 00:28:53.905 user 0m49.562s 00:28:53.905 sys 0m4.642s 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:53.905 ************************************ 00:28:53.905 END TEST raid_rebuild_test_sb_io 00:28:53.905 ************************************ 00:28:53.905 16:45:50 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:28:53.905 16:45:50 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:28:53.905 16:45:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:28:53.905 16:45:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:53.905 16:45:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:53.905 ************************************ 00:28:53.905 START TEST raid_rebuild_test 00:28:53.905 ************************************ 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local 
background_io=false 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@592 -- # local create_arg 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=1760601 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 1760601 /var/tmp/spdk-raid.sock 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 1760601 ']' 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:53.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:53.905 16:45:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:53.905 [2024-07-24 16:45:50.670658] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
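The bdevperf process above is launched with `-o 3M` (I/O size), and the EAL startup log that follows notes that 3145728 bytes exceeds the 65536-byte zero-copy threshold, so zero copy is disabled for the run. A trivial sketch of that size comparison (the threshold value is taken from the log message, not queried from SPDK):

```shell
# "-o 3M" means a 3 MiB I/O size; SPDK compares it against its
# zero-copy threshold and falls back to buffered copies when larger.
io_size=$((3 * 1024 * 1024))
zcopy_threshold=65536
echo "$io_size"   # 3145728
if [ "$io_size" -gt "$zcopy_threshold" ]; then
	echo 'Zero copy mechanism will not be used.'
fi
```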
00:28:53.905 [2024-07-24 16:45:50.670781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1760601 ] 00:28:53.905 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:53.905 Zero copy mechanism will not be used. 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:54.183 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:54.183 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.183 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:54.183 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.184 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:54.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.184 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:54.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.184 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:54.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.184 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:54.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.184 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:54.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.184 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:54.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.184 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:54.184 [2024-07-24 16:45:50.899833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.454 [2024-07-24 16:45:51.168248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.712 [2024-07-24 16:45:51.496683] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:54.712 [2024-07-24 16:45:51.496721] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:54.970 16:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:54.970 16:45:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:28:54.970 16:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:54.970 16:45:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:55.228 BaseBdev1_malloc 00:28:55.228 16:45:51 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:55.486 [2024-07-24 16:45:52.176630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:55.486 [2024-07-24 16:45:52.176696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:55.486 [2024-07-24 16:45:52.176729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:28:55.486 [2024-07-24 16:45:52.176749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:55.486 [2024-07-24 16:45:52.179517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:55.486 [2024-07-24 16:45:52.179556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:55.486 BaseBdev1 00:28:55.486 16:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:55.486 16:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:55.744 BaseBdev2_malloc 00:28:55.744 16:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:56.002 [2024-07-24 16:45:52.691036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:56.002 [2024-07-24 16:45:52.691101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:56.002 [2024-07-24 16:45:52.691131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:28:56.002 [2024-07-24 16:45:52.691162] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:56.002 [2024-07-24 16:45:52.693943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:56.002 [2024-07-24 16:45:52.693982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:56.002 BaseBdev2 00:28:56.002 16:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:56.002 16:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:56.259 BaseBdev3_malloc 00:28:56.259 16:45:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:56.517 [2024-07-24 16:45:53.197839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:56.517 [2024-07-24 16:45:53.197913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:56.517 [2024-07-24 16:45:53.197945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:28:56.517 [2024-07-24 16:45:53.197964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:56.517 [2024-07-24 16:45:53.200688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:56.517 [2024-07-24 16:45:53.200731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:56.517 BaseBdev3 00:28:56.517 16:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:56.517 16:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 
00:28:56.773 BaseBdev4_malloc 00:28:56.773 16:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:57.030 [2024-07-24 16:45:53.711852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:57.030 [2024-07-24 16:45:53.711921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:57.030 [2024-07-24 16:45:53.711952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:28:57.030 [2024-07-24 16:45:53.711971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:57.030 [2024-07-24 16:45:53.714707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:57.030 [2024-07-24 16:45:53.714744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:57.030 BaseBdev4 00:28:57.030 16:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:57.288 spare_malloc 00:28:57.288 16:45:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:57.545 spare_delay 00:28:57.545 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:57.802 [2024-07-24 16:45:54.428566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:57.802 [2024-07-24 16:45:54.428632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:57.802 
[2024-07-24 16:45:54.428662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:28:57.802 [2024-07-24 16:45:54.428680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:57.802 [2024-07-24 16:45:54.431443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:57.802 [2024-07-24 16:45:54.431481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:57.802 spare 00:28:57.802 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:28:57.802 [2024-07-24 16:45:54.645186] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:57.802 [2024-07-24 16:45:54.647502] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:57.802 [2024-07-24 16:45:54.647576] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:57.802 [2024-07-24 16:45:54.647642] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:57.802 [2024-07-24 16:45:54.647745] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:28:57.802 [2024-07-24 16:45:54.647763] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:57.802 [2024-07-24 16:45:54.648156] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:28:57.802 [2024-07-24 16:45:54.648422] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:28:57.802 [2024-07-24 16:45:54.648443] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:28:57.802 [2024-07-24 16:45:54.648679] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:58.060 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:58.060 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:58.060 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:58.060 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:58.060 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:58.060 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:58.060 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:58.060 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:58.060 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:58.060 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:58.060 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:58.060 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.060 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:58.060 "name": "raid_bdev1", 00:28:58.060 "uuid": "4affd72c-16ad-4c3b-806a-12bed850800b", 00:28:58.060 "strip_size_kb": 0, 00:28:58.060 "state": "online", 00:28:58.060 "raid_level": "raid1", 00:28:58.060 "superblock": false, 00:28:58.060 "num_base_bdevs": 4, 00:28:58.060 "num_base_bdevs_discovered": 4, 00:28:58.061 "num_base_bdevs_operational": 4, 00:28:58.061 "base_bdevs_list": [ 00:28:58.061 { 00:28:58.061 "name": "BaseBdev1", 00:28:58.061 
"uuid": "9fef74b4-1e93-530a-9d69-897c09613c26", 00:28:58.061 "is_configured": true, 00:28:58.061 "data_offset": 0, 00:28:58.061 "data_size": 65536 00:28:58.061 }, 00:28:58.061 { 00:28:58.061 "name": "BaseBdev2", 00:28:58.061 "uuid": "2b0571d9-ef0b-5fcd-bbb4-247824773bfb", 00:28:58.061 "is_configured": true, 00:28:58.061 "data_offset": 0, 00:28:58.061 "data_size": 65536 00:28:58.061 }, 00:28:58.061 { 00:28:58.061 "name": "BaseBdev3", 00:28:58.061 "uuid": "326cee92-389b-5e23-982c-3473a9fb4430", 00:28:58.061 "is_configured": true, 00:28:58.061 "data_offset": 0, 00:28:58.061 "data_size": 65536 00:28:58.061 }, 00:28:58.061 { 00:28:58.061 "name": "BaseBdev4", 00:28:58.061 "uuid": "8101607b-cec1-573e-9f9f-2c9ce2d0d895", 00:28:58.061 "is_configured": true, 00:28:58.061 "data_offset": 0, 00:28:58.061 "data_size": 65536 00:28:58.061 } 00:28:58.061 ] 00:28:58.061 }' 00:28:58.061 16:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:58.061 16:45:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:58.626 16:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:58.626 16:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:28:58.883 [2024-07-24 16:45:55.668320] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:58.883 16:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:28:58.883 16:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:58.883 16:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:59.140 16:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 
00:28:59.140 16:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:28:59.140 16:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:28:59.140 16:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:28:59.140 16:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:59.140 16:45:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:59.140 16:45:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:59.140 16:45:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:59.140 16:45:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:59.140 16:45:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:59.140 16:45:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:59.140 16:45:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:59.140 16:45:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:59.140 16:45:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:59.398 [2024-07-24 16:45:56.133248] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:28:59.398 /dev/nbd0 00:28:59.398 16:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:59.398 16:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:59.398 16:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:59.398 16:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 
00:28:59.398 16:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:59.398 16:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:59.398 16:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:59.398 16:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:28:59.398 16:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:59.398 16:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:59.398 16:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:59.398 1+0 records in 00:28:59.398 1+0 records out 00:28:59.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254175 s, 16.1 MB/s 00:28:59.398 16:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:59.399 16:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:28:59.399 16:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:59.399 16:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:59.399 16:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:28:59.399 16:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:59.399 16:45:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:59.399 16:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:28:59.399 16:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:28:59.399 16:45:56 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:29:07.507 65536+0 records in 00:29:07.507 65536+0 records out 00:29:07.507 33554432 bytes (34 MB, 32 MiB) copied, 7.26736 s, 4.6 MB/s 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:07.507 [2024-07-24 16:46:03.717349] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:07.507 
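The `dd` above fills the entire raid bdev through `/dev/nbd0`: with `bs=512 count=65536` it writes exactly `num_blocks * block_size` bytes, matching the 33554432 bytes (32 MiB) reported in the transfer summary. The arithmetic, spelled out:

```shell
# Full-device write size = raid_bdev_size (blocks) * block size.
num_blocks=65536
block_size=512
total=$((num_blocks * block_size))
echo "$total"                    # 33554432 bytes
echo "$((total / 1024 / 1024))"  # 32 (MiB), i.e. dd's "32 MiB" figure
```

(dd's "34 MB" in the same summary is the decimal reading of the same byte count, 33554432 / 10^6 rounded.)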
16:46:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:07.507 [2024-07-24 16:46:03.926029] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.507 16:46:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.507 16:46:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:07.507 "name": "raid_bdev1", 00:29:07.507 "uuid": "4affd72c-16ad-4c3b-806a-12bed850800b", 00:29:07.507 "strip_size_kb": 0, 00:29:07.507 "state": "online", 00:29:07.507 "raid_level": "raid1", 00:29:07.507 
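After `bdev_raid_remove_base_bdev BaseBdev1`, the `verify_raid_bdev_state raid_bdev1 online raid1 0 3` check above expects the array to stay online in degraded mode: `num_base_bdevs` remains 4 while discovered/operational drop to 3, since raid1 keeps serving I/O with a missing mirror. A sketch of that check against an abridged copy of the post-removal dump captured above (requires `jq`):

```shell
# Abridged raid_bdev1 dump after removing BaseBdev1 (first slot is now
# the null placeholder with the all-zero uuid).
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}'

state=$(echo "$raid_bdev_info" | jq -r '.state')
discovered=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')
operational=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_operational')
# Degraded but online: discovered < num_base_bdevs, state unchanged.
[ "$state" = "online" ] && [ "$discovered" -eq 3 ] && echo "degraded-but-online"
```

The subsequent `bdev_raid_add_base_dev raid_bdev1 spare` call then attaches the spare and triggers the "Started rebuild on raid bdev raid_bdev1" message, which is what the rest of the test waits on.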
"superblock": false, 00:29:07.507 "num_base_bdevs": 4, 00:29:07.507 "num_base_bdevs_discovered": 3, 00:29:07.507 "num_base_bdevs_operational": 3, 00:29:07.507 "base_bdevs_list": [ 00:29:07.507 { 00:29:07.507 "name": null, 00:29:07.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.507 "is_configured": false, 00:29:07.507 "data_offset": 0, 00:29:07.507 "data_size": 65536 00:29:07.507 }, 00:29:07.507 { 00:29:07.507 "name": "BaseBdev2", 00:29:07.507 "uuid": "2b0571d9-ef0b-5fcd-bbb4-247824773bfb", 00:29:07.507 "is_configured": true, 00:29:07.507 "data_offset": 0, 00:29:07.507 "data_size": 65536 00:29:07.507 }, 00:29:07.507 { 00:29:07.507 "name": "BaseBdev3", 00:29:07.507 "uuid": "326cee92-389b-5e23-982c-3473a9fb4430", 00:29:07.507 "is_configured": true, 00:29:07.507 "data_offset": 0, 00:29:07.507 "data_size": 65536 00:29:07.507 }, 00:29:07.507 { 00:29:07.507 "name": "BaseBdev4", 00:29:07.507 "uuid": "8101607b-cec1-573e-9f9f-2c9ce2d0d895", 00:29:07.507 "is_configured": true, 00:29:07.507 "data_offset": 0, 00:29:07.507 "data_size": 65536 00:29:07.507 } 00:29:07.507 ] 00:29:07.507 }' 00:29:07.507 16:46:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:07.507 16:46:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.075 16:46:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:08.334 [2024-07-24 16:46:04.948800] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:08.334 [2024-07-24 16:46:04.975065] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d145a0 00:29:08.334 [2024-07-24 16:46:04.977434] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:08.334 16:46:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:09.270 16:46:05 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:09.270 16:46:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:09.270 16:46:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:09.270 16:46:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:09.270 16:46:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:09.270 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:09.270 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:09.529 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:09.529 "name": "raid_bdev1", 00:29:09.529 "uuid": "4affd72c-16ad-4c3b-806a-12bed850800b", 00:29:09.529 "strip_size_kb": 0, 00:29:09.529 "state": "online", 00:29:09.529 "raid_level": "raid1", 00:29:09.529 "superblock": false, 00:29:09.529 "num_base_bdevs": 4, 00:29:09.529 "num_base_bdevs_discovered": 4, 00:29:09.529 "num_base_bdevs_operational": 4, 00:29:09.529 "process": { 00:29:09.529 "type": "rebuild", 00:29:09.529 "target": "spare", 00:29:09.529 "progress": { 00:29:09.529 "blocks": 24576, 00:29:09.529 "percent": 37 00:29:09.529 } 00:29:09.529 }, 00:29:09.529 "base_bdevs_list": [ 00:29:09.529 { 00:29:09.529 "name": "spare", 00:29:09.529 "uuid": "b89b8c0a-1dbd-58da-a52f-aca36f2981b1", 00:29:09.529 "is_configured": true, 00:29:09.529 "data_offset": 0, 00:29:09.529 "data_size": 65536 00:29:09.529 }, 00:29:09.529 { 00:29:09.529 "name": "BaseBdev2", 00:29:09.529 "uuid": "2b0571d9-ef0b-5fcd-bbb4-247824773bfb", 00:29:09.529 "is_configured": true, 00:29:09.529 "data_offset": 0, 00:29:09.529 "data_size": 65536 00:29:09.529 }, 00:29:09.529 { 
00:29:09.529 "name": "BaseBdev3", 00:29:09.529 "uuid": "326cee92-389b-5e23-982c-3473a9fb4430", 00:29:09.529 "is_configured": true, 00:29:09.529 "data_offset": 0, 00:29:09.529 "data_size": 65536 00:29:09.529 }, 00:29:09.529 { 00:29:09.529 "name": "BaseBdev4", 00:29:09.529 "uuid": "8101607b-cec1-573e-9f9f-2c9ce2d0d895", 00:29:09.529 "is_configured": true, 00:29:09.529 "data_offset": 0, 00:29:09.529 "data_size": 65536 00:29:09.529 } 00:29:09.529 ] 00:29:09.529 }' 00:29:09.529 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:09.529 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:09.529 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:09.529 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:09.529 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:09.788 [2024-07-24 16:46:06.530805] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:09.788 [2024-07-24 16:46:06.590397] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:09.788 [2024-07-24 16:46:06.590462] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:09.788 [2024-07-24 16:46:06.590486] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:09.788 [2024-07-24 16:46:06.590501] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:09.788 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:09.788 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:09.788 
16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:09.788 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:09.788 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:09.788 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:09.788 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:09.788 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:09.788 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:09.788 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:09.788 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:09.788 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.047 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:10.047 "name": "raid_bdev1", 00:29:10.047 "uuid": "4affd72c-16ad-4c3b-806a-12bed850800b", 00:29:10.047 "strip_size_kb": 0, 00:29:10.047 "state": "online", 00:29:10.047 "raid_level": "raid1", 00:29:10.047 "superblock": false, 00:29:10.047 "num_base_bdevs": 4, 00:29:10.047 "num_base_bdevs_discovered": 3, 00:29:10.047 "num_base_bdevs_operational": 3, 00:29:10.047 "base_bdevs_list": [ 00:29:10.047 { 00:29:10.047 "name": null, 00:29:10.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.047 "is_configured": false, 00:29:10.047 "data_offset": 0, 00:29:10.047 "data_size": 65536 00:29:10.047 }, 00:29:10.047 { 00:29:10.047 "name": "BaseBdev2", 00:29:10.047 "uuid": "2b0571d9-ef0b-5fcd-bbb4-247824773bfb", 00:29:10.047 "is_configured": true, 00:29:10.047 
"data_offset": 0, 00:29:10.047 "data_size": 65536 00:29:10.047 }, 00:29:10.047 { 00:29:10.047 "name": "BaseBdev3", 00:29:10.047 "uuid": "326cee92-389b-5e23-982c-3473a9fb4430", 00:29:10.047 "is_configured": true, 00:29:10.047 "data_offset": 0, 00:29:10.047 "data_size": 65536 00:29:10.047 }, 00:29:10.047 { 00:29:10.047 "name": "BaseBdev4", 00:29:10.047 "uuid": "8101607b-cec1-573e-9f9f-2c9ce2d0d895", 00:29:10.047 "is_configured": true, 00:29:10.047 "data_offset": 0, 00:29:10.047 "data_size": 65536 00:29:10.047 } 00:29:10.047 ] 00:29:10.047 }' 00:29:10.047 16:46:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:10.047 16:46:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.614 16:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:10.614 16:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:10.614 16:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:10.614 16:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:10.614 16:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:10.614 16:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:10.614 16:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.873 16:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:10.873 "name": "raid_bdev1", 00:29:10.873 "uuid": "4affd72c-16ad-4c3b-806a-12bed850800b", 00:29:10.873 "strip_size_kb": 0, 00:29:10.873 "state": "online", 00:29:10.873 "raid_level": "raid1", 00:29:10.873 "superblock": false, 00:29:10.873 "num_base_bdevs": 4, 00:29:10.873 "num_base_bdevs_discovered": 
3, 00:29:10.873 "num_base_bdevs_operational": 3, 00:29:10.873 "base_bdevs_list": [ 00:29:10.873 { 00:29:10.873 "name": null, 00:29:10.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.873 "is_configured": false, 00:29:10.873 "data_offset": 0, 00:29:10.873 "data_size": 65536 00:29:10.873 }, 00:29:10.873 { 00:29:10.873 "name": "BaseBdev2", 00:29:10.873 "uuid": "2b0571d9-ef0b-5fcd-bbb4-247824773bfb", 00:29:10.873 "is_configured": true, 00:29:10.873 "data_offset": 0, 00:29:10.873 "data_size": 65536 00:29:10.873 }, 00:29:10.873 { 00:29:10.873 "name": "BaseBdev3", 00:29:10.873 "uuid": "326cee92-389b-5e23-982c-3473a9fb4430", 00:29:10.873 "is_configured": true, 00:29:10.873 "data_offset": 0, 00:29:10.873 "data_size": 65536 00:29:10.873 }, 00:29:10.873 { 00:29:10.873 "name": "BaseBdev4", 00:29:10.873 "uuid": "8101607b-cec1-573e-9f9f-2c9ce2d0d895", 00:29:10.873 "is_configured": true, 00:29:10.873 "data_offset": 0, 00:29:10.873 "data_size": 65536 00:29:10.873 } 00:29:10.873 ] 00:29:10.873 }' 00:29:10.873 16:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:10.873 16:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:10.873 16:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:11.132 16:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:11.132 16:46:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:11.132 [2024-07-24 16:46:07.966675] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:11.132 [2024-07-24 16:46:07.989414] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d14670 00:29:11.132 [2024-07-24 16:46:07.991770] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: 
Started rebuild on raid bdev raid_bdev1 00:29:11.391 16:46:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:29:12.363 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:12.363 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:12.363 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:12.363 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:12.363 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:12.363 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.363 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.624 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:12.624 "name": "raid_bdev1", 00:29:12.624 "uuid": "4affd72c-16ad-4c3b-806a-12bed850800b", 00:29:12.624 "strip_size_kb": 0, 00:29:12.624 "state": "online", 00:29:12.624 "raid_level": "raid1", 00:29:12.624 "superblock": false, 00:29:12.624 "num_base_bdevs": 4, 00:29:12.624 "num_base_bdevs_discovered": 4, 00:29:12.624 "num_base_bdevs_operational": 4, 00:29:12.624 "process": { 00:29:12.624 "type": "rebuild", 00:29:12.624 "target": "spare", 00:29:12.624 "progress": { 00:29:12.624 "blocks": 24576, 00:29:12.624 "percent": 37 00:29:12.624 } 00:29:12.624 }, 00:29:12.624 "base_bdevs_list": [ 00:29:12.624 { 00:29:12.624 "name": "spare", 00:29:12.624 "uuid": "b89b8c0a-1dbd-58da-a52f-aca36f2981b1", 00:29:12.624 "is_configured": true, 00:29:12.624 "data_offset": 0, 00:29:12.624 "data_size": 65536 00:29:12.624 }, 00:29:12.624 { 00:29:12.624 "name": "BaseBdev2", 00:29:12.624 "uuid": 
"2b0571d9-ef0b-5fcd-bbb4-247824773bfb", 00:29:12.624 "is_configured": true, 00:29:12.624 "data_offset": 0, 00:29:12.624 "data_size": 65536 00:29:12.624 }, 00:29:12.624 { 00:29:12.624 "name": "BaseBdev3", 00:29:12.624 "uuid": "326cee92-389b-5e23-982c-3473a9fb4430", 00:29:12.624 "is_configured": true, 00:29:12.624 "data_offset": 0, 00:29:12.624 "data_size": 65536 00:29:12.624 }, 00:29:12.624 { 00:29:12.624 "name": "BaseBdev4", 00:29:12.624 "uuid": "8101607b-cec1-573e-9f9f-2c9ce2d0d895", 00:29:12.624 "is_configured": true, 00:29:12.624 "data_offset": 0, 00:29:12.624 "data_size": 65536 00:29:12.624 } 00:29:12.624 ] 00:29:12.624 }' 00:29:12.624 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:12.624 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:12.625 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:12.625 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:12.625 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:29:12.625 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:29:12.625 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:29:12.625 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:29:12.625 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:12.883 [2024-07-24 16:46:09.540673] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:12.883 [2024-07-24 16:46:09.604682] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d14670 00:29:12.884 16:46:09 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:29:12.884 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:29:12.884 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:12.884 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:12.884 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:12.884 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:12.884 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:12.884 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.884 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:13.142 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:13.142 "name": "raid_bdev1", 00:29:13.142 "uuid": "4affd72c-16ad-4c3b-806a-12bed850800b", 00:29:13.142 "strip_size_kb": 0, 00:29:13.142 "state": "online", 00:29:13.142 "raid_level": "raid1", 00:29:13.142 "superblock": false, 00:29:13.142 "num_base_bdevs": 4, 00:29:13.142 "num_base_bdevs_discovered": 3, 00:29:13.142 "num_base_bdevs_operational": 3, 00:29:13.142 "process": { 00:29:13.142 "type": "rebuild", 00:29:13.142 "target": "spare", 00:29:13.142 "progress": { 00:29:13.142 "blocks": 36864, 00:29:13.142 "percent": 56 00:29:13.142 } 00:29:13.142 }, 00:29:13.142 "base_bdevs_list": [ 00:29:13.142 { 00:29:13.142 "name": "spare", 00:29:13.142 "uuid": "b89b8c0a-1dbd-58da-a52f-aca36f2981b1", 00:29:13.142 "is_configured": true, 00:29:13.142 "data_offset": 0, 00:29:13.142 "data_size": 65536 00:29:13.142 }, 00:29:13.142 { 00:29:13.142 "name": null, 00:29:13.142 
"uuid": "00000000-0000-0000-0000-000000000000", 00:29:13.142 "is_configured": false, 00:29:13.142 "data_offset": 0, 00:29:13.142 "data_size": 65536 00:29:13.142 }, 00:29:13.142 { 00:29:13.142 "name": "BaseBdev3", 00:29:13.142 "uuid": "326cee92-389b-5e23-982c-3473a9fb4430", 00:29:13.142 "is_configured": true, 00:29:13.142 "data_offset": 0, 00:29:13.142 "data_size": 65536 00:29:13.142 }, 00:29:13.142 { 00:29:13.142 "name": "BaseBdev4", 00:29:13.142 "uuid": "8101607b-cec1-573e-9f9f-2c9ce2d0d895", 00:29:13.142 "is_configured": true, 00:29:13.142 "data_offset": 0, 00:29:13.142 "data_size": 65536 00:29:13.142 } 00:29:13.142 ] 00:29:13.142 }' 00:29:13.142 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:13.142 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:13.142 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:13.142 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:13.142 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=973 00:29:13.142 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:13.142 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:13.142 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:13.142 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:13.142 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:13.142 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:13.142 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:13.142 16:46:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:13.399 16:46:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:13.399 "name": "raid_bdev1", 00:29:13.399 "uuid": "4affd72c-16ad-4c3b-806a-12bed850800b", 00:29:13.399 "strip_size_kb": 0, 00:29:13.399 "state": "online", 00:29:13.399 "raid_level": "raid1", 00:29:13.399 "superblock": false, 00:29:13.399 "num_base_bdevs": 4, 00:29:13.399 "num_base_bdevs_discovered": 3, 00:29:13.399 "num_base_bdevs_operational": 3, 00:29:13.399 "process": { 00:29:13.400 "type": "rebuild", 00:29:13.400 "target": "spare", 00:29:13.400 "progress": { 00:29:13.400 "blocks": 43008, 00:29:13.400 "percent": 65 00:29:13.400 } 00:29:13.400 }, 00:29:13.400 "base_bdevs_list": [ 00:29:13.400 { 00:29:13.400 "name": "spare", 00:29:13.400 "uuid": "b89b8c0a-1dbd-58da-a52f-aca36f2981b1", 00:29:13.400 "is_configured": true, 00:29:13.400 "data_offset": 0, 00:29:13.400 "data_size": 65536 00:29:13.400 }, 00:29:13.400 { 00:29:13.400 "name": null, 00:29:13.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:13.400 "is_configured": false, 00:29:13.400 "data_offset": 0, 00:29:13.400 "data_size": 65536 00:29:13.400 }, 00:29:13.400 { 00:29:13.400 "name": "BaseBdev3", 00:29:13.400 "uuid": "326cee92-389b-5e23-982c-3473a9fb4430", 00:29:13.400 "is_configured": true, 00:29:13.400 "data_offset": 0, 00:29:13.400 "data_size": 65536 00:29:13.400 }, 00:29:13.400 { 00:29:13.400 "name": "BaseBdev4", 00:29:13.400 "uuid": "8101607b-cec1-573e-9f9f-2c9ce2d0d895", 00:29:13.400 "is_configured": true, 00:29:13.400 "data_offset": 0, 00:29:13.400 "data_size": 65536 00:29:13.400 } 00:29:13.400 ] 00:29:13.400 }' 00:29:13.400 16:46:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:13.400 16:46:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:29:13.400 16:46:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:13.400 16:46:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:13.400 16:46:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:14.770 [2024-07-24 16:46:11.217739] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:14.770 [2024-07-24 16:46:11.217822] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:14.770 [2024-07-24 16:46:11.217876] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:14.770 "name": "raid_bdev1", 00:29:14.770 "uuid": "4affd72c-16ad-4c3b-806a-12bed850800b", 00:29:14.770 "strip_size_kb": 0, 00:29:14.770 "state": "online", 00:29:14.770 "raid_level": "raid1", 00:29:14.770 
"superblock": false, 00:29:14.770 "num_base_bdevs": 4, 00:29:14.770 "num_base_bdevs_discovered": 3, 00:29:14.770 "num_base_bdevs_operational": 3, 00:29:14.770 "base_bdevs_list": [ 00:29:14.770 { 00:29:14.770 "name": "spare", 00:29:14.770 "uuid": "b89b8c0a-1dbd-58da-a52f-aca36f2981b1", 00:29:14.770 "is_configured": true, 00:29:14.770 "data_offset": 0, 00:29:14.770 "data_size": 65536 00:29:14.770 }, 00:29:14.770 { 00:29:14.770 "name": null, 00:29:14.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:14.770 "is_configured": false, 00:29:14.770 "data_offset": 0, 00:29:14.770 "data_size": 65536 00:29:14.770 }, 00:29:14.770 { 00:29:14.770 "name": "BaseBdev3", 00:29:14.770 "uuid": "326cee92-389b-5e23-982c-3473a9fb4430", 00:29:14.770 "is_configured": true, 00:29:14.770 "data_offset": 0, 00:29:14.770 "data_size": 65536 00:29:14.770 }, 00:29:14.770 { 00:29:14.770 "name": "BaseBdev4", 00:29:14.770 "uuid": "8101607b-cec1-573e-9f9f-2c9ce2d0d895", 00:29:14.770 "is_configured": true, 00:29:14.770 "data_offset": 0, 00:29:14.770 "data_size": 65536 00:29:14.770 } 00:29:14.770 ] 00:29:14.770 }' 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:14.770 16:46:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:14.770 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.027 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:15.028 "name": "raid_bdev1", 00:29:15.028 "uuid": "4affd72c-16ad-4c3b-806a-12bed850800b", 00:29:15.028 "strip_size_kb": 0, 00:29:15.028 "state": "online", 00:29:15.028 "raid_level": "raid1", 00:29:15.028 "superblock": false, 00:29:15.028 "num_base_bdevs": 4, 00:29:15.028 "num_base_bdevs_discovered": 3, 00:29:15.028 "num_base_bdevs_operational": 3, 00:29:15.028 "base_bdevs_list": [ 00:29:15.028 { 00:29:15.028 "name": "spare", 00:29:15.028 "uuid": "b89b8c0a-1dbd-58da-a52f-aca36f2981b1", 00:29:15.028 "is_configured": true, 00:29:15.028 "data_offset": 0, 00:29:15.028 "data_size": 65536 00:29:15.028 }, 00:29:15.028 { 00:29:15.028 "name": null, 00:29:15.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.028 "is_configured": false, 00:29:15.028 "data_offset": 0, 00:29:15.028 "data_size": 65536 00:29:15.028 }, 00:29:15.028 { 00:29:15.028 "name": "BaseBdev3", 00:29:15.028 "uuid": "326cee92-389b-5e23-982c-3473a9fb4430", 00:29:15.028 "is_configured": true, 00:29:15.028 "data_offset": 0, 00:29:15.028 "data_size": 65536 00:29:15.028 }, 00:29:15.028 { 00:29:15.028 "name": "BaseBdev4", 00:29:15.028 "uuid": "8101607b-cec1-573e-9f9f-2c9ce2d0d895", 00:29:15.028 "is_configured": true, 00:29:15.028 "data_offset": 0, 00:29:15.028 "data_size": 65536 00:29:15.028 } 00:29:15.028 ] 00:29:15.028 }' 00:29:15.028 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r 
'.process.type // "none"' 00:29:15.028 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:15.028 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:15.285 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:15.285 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:15.285 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:15.285 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:15.285 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:15.285 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:15.285 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:15.285 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:15.285 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:15.285 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:15.285 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:15.285 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:15.285 16:46:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.285 16:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:15.285 "name": "raid_bdev1", 00:29:15.285 "uuid": "4affd72c-16ad-4c3b-806a-12bed850800b", 00:29:15.285 "strip_size_kb": 0, 00:29:15.285 "state": "online", 
00:29:15.285 "raid_level": "raid1", 00:29:15.285 "superblock": false, 00:29:15.285 "num_base_bdevs": 4, 00:29:15.285 "num_base_bdevs_discovered": 3, 00:29:15.285 "num_base_bdevs_operational": 3, 00:29:15.285 "base_bdevs_list": [ 00:29:15.285 { 00:29:15.285 "name": "spare", 00:29:15.285 "uuid": "b89b8c0a-1dbd-58da-a52f-aca36f2981b1", 00:29:15.285 "is_configured": true, 00:29:15.285 "data_offset": 0, 00:29:15.285 "data_size": 65536 00:29:15.285 }, 00:29:15.285 { 00:29:15.285 "name": null, 00:29:15.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:15.285 "is_configured": false, 00:29:15.285 "data_offset": 0, 00:29:15.285 "data_size": 65536 00:29:15.285 }, 00:29:15.285 { 00:29:15.285 "name": "BaseBdev3", 00:29:15.285 "uuid": "326cee92-389b-5e23-982c-3473a9fb4430", 00:29:15.285 "is_configured": true, 00:29:15.285 "data_offset": 0, 00:29:15.285 "data_size": 65536 00:29:15.285 }, 00:29:15.285 { 00:29:15.285 "name": "BaseBdev4", 00:29:15.285 "uuid": "8101607b-cec1-573e-9f9f-2c9ce2d0d895", 00:29:15.285 "is_configured": true, 00:29:15.285 "data_offset": 0, 00:29:15.285 "data_size": 65536 00:29:15.285 } 00:29:15.285 ] 00:29:15.285 }' 00:29:15.285 16:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:15.285 16:46:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.850 16:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:16.107 [2024-07-24 16:46:12.887134] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:16.107 [2024-07-24 16:46:12.887174] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:16.107 [2024-07-24 16:46:12.887259] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:16.107 [2024-07-24 16:46:12.887356] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:16.107 [2024-07-24 16:46:12.887373] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:29:16.107 16:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.107 16:46:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:29:16.363 16:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:29:16.364 16:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:29:16.364 16:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:29:16.364 16:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:16.364 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:16.364 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:16.364 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:16.364 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:16.364 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:16.364 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:16.364 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:16.364 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:16.364 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:16.620 /dev/nbd0 
00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:16.620 1+0 records in 00:29:16.620 1+0 records out 00:29:16.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268224 s, 15.3 MB/s 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:16.620 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:29:16.620 
16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:16.621 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:16.621 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:16.877 /dev/nbd1 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:16.877 1+0 records in 00:29:16.877 1+0 records out 00:29:16.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253789 s, 16.1 MB/s 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # 
size=4096 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:16.877 16:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:17.136 16:46:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:17.136 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:17.136 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:17.136 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:17.136 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:17.136 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:17.136 16:46:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:17.395 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:17.395 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:17.395 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:17.395 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:17.395 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # 
(( i <= 20 )) 00:29:17.395 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:17.395 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:17.395 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:17.395 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:17.395 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:17.654 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:17.654 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 1760601 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 1760601 ']' 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 1760601 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1760601 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1760601' 00:29:17.655 killing process with pid 1760601 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 1760601 00:29:17.655 Received shutdown signal, test time was about 60.000000 seconds 00:29:17.655 00:29:17.655 Latency(us) 00:29:17.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.655 =================================================================================================================== 00:29:17.655 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:17.655 [2024-07-24 16:46:14.451168] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:17.655 16:46:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 1760601 00:29:18.222 [2024-07-24 16:46:15.040700] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:29:20.127 00:29:20.127 real 0m26.192s 00:29:20.127 user 0m34.358s 00:29:20.127 sys 0m4.800s 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.127 ************************************ 00:29:20.127 END TEST raid_rebuild_test 00:29:20.127 ************************************ 00:29:20.127 16:46:16 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 
00:29:20.127 16:46:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:20.127 16:46:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:20.127 16:46:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:20.127 ************************************ 00:29:20.127 START TEST raid_rebuild_test_sb 00:29:20.127 ************************************ 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:29:20.127 16:46:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=1765293 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 1765293 /var/tmp/spdk-raid.sock 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 1765293 ']' 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:20.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:20.127 16:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:20.127 [2024-07-24 16:46:16.949190] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:29:20.127 [2024-07-24 16:46:16.949315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765293 ] 00:29:20.127 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:20.127 Zero copy mechanism will not be used. 
00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:20.387 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:20.387 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:20.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.387 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:20.387 [2024-07-24 16:46:17.176872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.646 [2024-07-24 16:46:17.460172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.213 [2024-07-24 16:46:17.794868] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:21.214 [2024-07-24 16:46:17.794917] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:21.214 16:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:21.214 16:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:29:21.214 16:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:21.214 16:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:21.471 BaseBdev1_malloc 00:29:21.471 16:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:21.729 [2024-07-24 16:46:18.482393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:21.729 [2024-07-24 16:46:18.482460] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:29:21.729 [2024-07-24 16:46:18.482495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:29:21.729 [2024-07-24 16:46:18.482515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:21.729 [2024-07-24 16:46:18.485285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:21.729 [2024-07-24 16:46:18.485328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:21.729 BaseBdev1 00:29:21.729 16:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:21.729 16:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:21.988 BaseBdev2_malloc 00:29:21.988 16:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:22.247 [2024-07-24 16:46:18.989586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:22.247 [2024-07-24 16:46:18.989649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:22.247 [2024-07-24 16:46:18.989676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:29:22.247 [2024-07-24 16:46:18.989697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:22.247 [2024-07-24 16:46:18.992448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:22.247 [2024-07-24 16:46:18.992486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:22.247 BaseBdev2 00:29:22.247 16:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in 
"${base_bdevs[@]}" 00:29:22.247 16:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:22.506 BaseBdev3_malloc 00:29:22.506 16:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:22.766 [2024-07-24 16:46:19.496094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:22.766 [2024-07-24 16:46:19.496172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:22.766 [2024-07-24 16:46:19.496205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:29:22.766 [2024-07-24 16:46:19.496224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:22.766 [2024-07-24 16:46:19.498973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:22.766 [2024-07-24 16:46:19.499012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:22.766 BaseBdev3 00:29:22.766 16:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:22.766 16:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:23.025 BaseBdev4_malloc 00:29:23.025 16:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:23.284 [2024-07-24 16:46:20.008746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:23.284 [2024-07-24 
16:46:20.008820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:23.284 [2024-07-24 16:46:20.008847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:29:23.284 [2024-07-24 16:46:20.008866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:23.284 [2024-07-24 16:46:20.011631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:23.284 [2024-07-24 16:46:20.011668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:23.284 BaseBdev4 00:29:23.284 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:23.543 spare_malloc 00:29:23.543 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:23.801 spare_delay 00:29:23.801 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:24.060 [2024-07-24 16:46:20.739188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:24.060 [2024-07-24 16:46:20.739253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:24.060 [2024-07-24 16:46:20.739282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:29:24.060 [2024-07-24 16:46:20.739301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:24.060 [2024-07-24 16:46:20.742099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:24.060 [2024-07-24 16:46:20.742146] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:24.060 spare 00:29:24.060 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:24.319 [2024-07-24 16:46:20.963828] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:24.319 [2024-07-24 16:46:20.966160] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:24.319 [2024-07-24 16:46:20.966233] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:24.319 [2024-07-24 16:46:20.966300] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:24.319 [2024-07-24 16:46:20.966558] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:29:24.319 [2024-07-24 16:46:20.966584] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:24.319 [2024-07-24 16:46:20.966969] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:29:24.319 [2024-07-24 16:46:20.967237] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:29:24.319 [2024-07-24 16:46:20.967253] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:29:24.319 [2024-07-24 16:46:20.967452] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:24.319 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:24.319 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:24.319 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:29:24.319 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:24.319 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:24.319 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:24.319 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:24.319 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:24.319 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:24.319 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:24.319 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.319 16:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:24.578 16:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:24.578 "name": "raid_bdev1", 00:29:24.578 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:24.578 "strip_size_kb": 0, 00:29:24.578 "state": "online", 00:29:24.578 "raid_level": "raid1", 00:29:24.578 "superblock": true, 00:29:24.578 "num_base_bdevs": 4, 00:29:24.578 "num_base_bdevs_discovered": 4, 00:29:24.578 "num_base_bdevs_operational": 4, 00:29:24.578 "base_bdevs_list": [ 00:29:24.578 { 00:29:24.578 "name": "BaseBdev1", 00:29:24.578 "uuid": "eee4f4b5-b8d1-5edb-a3da-1010dd1e7086", 00:29:24.578 "is_configured": true, 00:29:24.578 "data_offset": 2048, 00:29:24.578 "data_size": 63488 00:29:24.578 }, 00:29:24.578 { 00:29:24.578 "name": "BaseBdev2", 00:29:24.578 "uuid": "e5847ba1-bc88-59ea-81ca-f7611f8812e3", 00:29:24.578 "is_configured": true, 00:29:24.578 "data_offset": 2048, 00:29:24.578 "data_size": 63488 
00:29:24.578 }, 00:29:24.578 { 00:29:24.578 "name": "BaseBdev3", 00:29:24.578 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:24.578 "is_configured": true, 00:29:24.578 "data_offset": 2048, 00:29:24.578 "data_size": 63488 00:29:24.578 }, 00:29:24.578 { 00:29:24.578 "name": "BaseBdev4", 00:29:24.578 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:24.578 "is_configured": true, 00:29:24.578 "data_offset": 2048, 00:29:24.578 "data_size": 63488 00:29:24.578 } 00:29:24.578 ] 00:29:24.578 }' 00:29:24.578 16:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:24.578 16:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:25.146 16:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:25.146 16:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:29:25.146 [2024-07-24 16:46:21.930962] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:25.146 16:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:29:25.146 16:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.146 16:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:25.405 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:29:25.405 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:29:25.405 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:29:25.405 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:29:25.405 16:46:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:25.405 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:25.405 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:25.405 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:25.405 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:25.405 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:25.405 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:25.405 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:25.405 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:25.405 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:25.664 [2024-07-24 16:46:22.391865] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:29:25.664 /dev/nbd0 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 
-- # grep -q -w nbd0 /proc/partitions 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:25.664 1+0 records in 00:29:25.664 1+0 records out 00:29:25.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260597 s, 15.7 MB/s 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:25.664 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:25.665 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:29:25.665 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:29:25.665 16:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:29:32.256 63488+0 records in 00:29:32.256 63488+0 records out 00:29:32.256 32505856 bytes (33 MB, 31 MiB) copied, 6.24799 s, 5.2 
MB/s 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:32.256 [2024-07-24 16:46:28.946322] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:32.256 16:46:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:32.515 
[2024-07-24 16:46:29.158590] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:32.515 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:32.515 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:32.515 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:32.515 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:32.515 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:32.515 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:32.515 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:32.515 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:32.515 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:32.515 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:32.515 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.515 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.774 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:32.774 "name": "raid_bdev1", 00:29:32.774 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:32.774 "strip_size_kb": 0, 00:29:32.774 "state": "online", 00:29:32.774 "raid_level": "raid1", 00:29:32.774 "superblock": true, 00:29:32.774 "num_base_bdevs": 4, 00:29:32.774 "num_base_bdevs_discovered": 3, 00:29:32.774 "num_base_bdevs_operational": 3, 00:29:32.774 
"base_bdevs_list": [ 00:29:32.774 { 00:29:32.774 "name": null, 00:29:32.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:32.774 "is_configured": false, 00:29:32.774 "data_offset": 2048, 00:29:32.774 "data_size": 63488 00:29:32.774 }, 00:29:32.774 { 00:29:32.774 "name": "BaseBdev2", 00:29:32.774 "uuid": "e5847ba1-bc88-59ea-81ca-f7611f8812e3", 00:29:32.774 "is_configured": true, 00:29:32.774 "data_offset": 2048, 00:29:32.774 "data_size": 63488 00:29:32.774 }, 00:29:32.774 { 00:29:32.774 "name": "BaseBdev3", 00:29:32.774 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:32.774 "is_configured": true, 00:29:32.774 "data_offset": 2048, 00:29:32.774 "data_size": 63488 00:29:32.774 }, 00:29:32.774 { 00:29:32.774 "name": "BaseBdev4", 00:29:32.774 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:32.774 "is_configured": true, 00:29:32.774 "data_offset": 2048, 00:29:32.774 "data_size": 63488 00:29:32.774 } 00:29:32.774 ] 00:29:32.774 }' 00:29:32.774 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:32.774 16:46:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.340 16:46:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:33.597 [2024-07-24 16:46:30.205449] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:33.597 [2024-07-24 16:46:30.227821] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caad40 00:29:33.597 [2024-07-24 16:46:30.230198] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:33.597 16:46:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:34.531 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:34.531 16:46:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:34.531 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:34.531 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:34.531 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:34.531 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.531 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.789 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:34.789 "name": "raid_bdev1", 00:29:34.789 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:34.789 "strip_size_kb": 0, 00:29:34.789 "state": "online", 00:29:34.789 "raid_level": "raid1", 00:29:34.789 "superblock": true, 00:29:34.789 "num_base_bdevs": 4, 00:29:34.789 "num_base_bdevs_discovered": 4, 00:29:34.789 "num_base_bdevs_operational": 4, 00:29:34.789 "process": { 00:29:34.789 "type": "rebuild", 00:29:34.789 "target": "spare", 00:29:34.789 "progress": { 00:29:34.789 "blocks": 22528, 00:29:34.789 "percent": 35 00:29:34.789 } 00:29:34.789 }, 00:29:34.789 "base_bdevs_list": [ 00:29:34.789 { 00:29:34.789 "name": "spare", 00:29:34.789 "uuid": "f415e3fc-2948-5f1f-8f16-e2e6fcbd0619", 00:29:34.789 "is_configured": true, 00:29:34.789 "data_offset": 2048, 00:29:34.789 "data_size": 63488 00:29:34.789 }, 00:29:34.789 { 00:29:34.789 "name": "BaseBdev2", 00:29:34.789 "uuid": "e5847ba1-bc88-59ea-81ca-f7611f8812e3", 00:29:34.789 "is_configured": true, 00:29:34.789 "data_offset": 2048, 00:29:34.789 "data_size": 63488 00:29:34.789 }, 00:29:34.789 { 00:29:34.789 "name": "BaseBdev3", 00:29:34.789 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 
00:29:34.789 "is_configured": true, 00:29:34.789 "data_offset": 2048, 00:29:34.789 "data_size": 63488 00:29:34.789 }, 00:29:34.789 { 00:29:34.789 "name": "BaseBdev4", 00:29:34.789 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:34.789 "is_configured": true, 00:29:34.789 "data_offset": 2048, 00:29:34.789 "data_size": 63488 00:29:34.789 } 00:29:34.789 ] 00:29:34.789 }' 00:29:34.789 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:34.789 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:34.789 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:34.789 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:34.789 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:35.047 [2024-07-24 16:46:31.727445] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:35.047 [2024-07-24 16:46:31.742410] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:35.047 [2024-07-24 16:46:31.742477] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:35.047 [2024-07-24 16:46:31.742500] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:35.047 [2024-07-24 16:46:31.742515] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:35.047 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:35.047 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:35.047 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:29:35.047 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:35.047 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:35.047 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:35.047 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:35.047 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:35.047 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:35.047 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:35.047 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.047 16:46:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:35.305 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:35.305 "name": "raid_bdev1", 00:29:35.305 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:35.305 "strip_size_kb": 0, 00:29:35.305 "state": "online", 00:29:35.305 "raid_level": "raid1", 00:29:35.305 "superblock": true, 00:29:35.305 "num_base_bdevs": 4, 00:29:35.305 "num_base_bdevs_discovered": 3, 00:29:35.305 "num_base_bdevs_operational": 3, 00:29:35.305 "base_bdevs_list": [ 00:29:35.305 { 00:29:35.305 "name": null, 00:29:35.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:35.305 "is_configured": false, 00:29:35.305 "data_offset": 2048, 00:29:35.305 "data_size": 63488 00:29:35.305 }, 00:29:35.305 { 00:29:35.305 "name": "BaseBdev2", 00:29:35.305 "uuid": "e5847ba1-bc88-59ea-81ca-f7611f8812e3", 00:29:35.305 "is_configured": true, 00:29:35.305 "data_offset": 2048, 00:29:35.305 
"data_size": 63488 00:29:35.305 }, 00:29:35.305 { 00:29:35.305 "name": "BaseBdev3", 00:29:35.305 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:35.305 "is_configured": true, 00:29:35.305 "data_offset": 2048, 00:29:35.305 "data_size": 63488 00:29:35.305 }, 00:29:35.305 { 00:29:35.305 "name": "BaseBdev4", 00:29:35.305 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:35.305 "is_configured": true, 00:29:35.305 "data_offset": 2048, 00:29:35.305 "data_size": 63488 00:29:35.305 } 00:29:35.305 ] 00:29:35.305 }' 00:29:35.305 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:35.305 16:46:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:35.872 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:35.872 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:35.872 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:35.872 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:35.872 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:35.872 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.872 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.130 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:36.130 "name": "raid_bdev1", 00:29:36.130 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:36.130 "strip_size_kb": 0, 00:29:36.130 "state": "online", 00:29:36.130 "raid_level": "raid1", 00:29:36.130 "superblock": true, 00:29:36.130 "num_base_bdevs": 4, 00:29:36.130 
"num_base_bdevs_discovered": 3, 00:29:36.130 "num_base_bdevs_operational": 3, 00:29:36.130 "base_bdevs_list": [ 00:29:36.130 { 00:29:36.130 "name": null, 00:29:36.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:36.130 "is_configured": false, 00:29:36.130 "data_offset": 2048, 00:29:36.130 "data_size": 63488 00:29:36.130 }, 00:29:36.130 { 00:29:36.130 "name": "BaseBdev2", 00:29:36.130 "uuid": "e5847ba1-bc88-59ea-81ca-f7611f8812e3", 00:29:36.130 "is_configured": true, 00:29:36.131 "data_offset": 2048, 00:29:36.131 "data_size": 63488 00:29:36.131 }, 00:29:36.131 { 00:29:36.131 "name": "BaseBdev3", 00:29:36.131 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:36.131 "is_configured": true, 00:29:36.131 "data_offset": 2048, 00:29:36.131 "data_size": 63488 00:29:36.131 }, 00:29:36.131 { 00:29:36.131 "name": "BaseBdev4", 00:29:36.131 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:36.131 "is_configured": true, 00:29:36.131 "data_offset": 2048, 00:29:36.131 "data_size": 63488 00:29:36.131 } 00:29:36.131 ] 00:29:36.131 }' 00:29:36.131 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:36.131 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:36.131 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:36.131 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:36.131 16:46:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:36.389 [2024-07-24 16:46:33.077762] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:36.389 [2024-07-24 16:46:33.098920] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caae10 00:29:36.389 [2024-07-24 16:46:33.101277] 
bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:36.389 16:46:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:29:37.322 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:37.322 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:37.322 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:37.322 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:37.322 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:37.322 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.322 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.580 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:37.580 "name": "raid_bdev1", 00:29:37.580 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:37.580 "strip_size_kb": 0, 00:29:37.580 "state": "online", 00:29:37.580 "raid_level": "raid1", 00:29:37.580 "superblock": true, 00:29:37.580 "num_base_bdevs": 4, 00:29:37.580 "num_base_bdevs_discovered": 4, 00:29:37.580 "num_base_bdevs_operational": 4, 00:29:37.580 "process": { 00:29:37.580 "type": "rebuild", 00:29:37.580 "target": "spare", 00:29:37.580 "progress": { 00:29:37.580 "blocks": 24576, 00:29:37.580 "percent": 38 00:29:37.580 } 00:29:37.580 }, 00:29:37.580 "base_bdevs_list": [ 00:29:37.580 { 00:29:37.580 "name": "spare", 00:29:37.580 "uuid": "f415e3fc-2948-5f1f-8f16-e2e6fcbd0619", 00:29:37.580 "is_configured": true, 00:29:37.580 "data_offset": 2048, 00:29:37.580 "data_size": 63488 00:29:37.580 }, 
00:29:37.580 { 00:29:37.580 "name": "BaseBdev2", 00:29:37.580 "uuid": "e5847ba1-bc88-59ea-81ca-f7611f8812e3", 00:29:37.580 "is_configured": true, 00:29:37.580 "data_offset": 2048, 00:29:37.580 "data_size": 63488 00:29:37.580 }, 00:29:37.580 { 00:29:37.580 "name": "BaseBdev3", 00:29:37.580 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:37.580 "is_configured": true, 00:29:37.580 "data_offset": 2048, 00:29:37.580 "data_size": 63488 00:29:37.580 }, 00:29:37.580 { 00:29:37.580 "name": "BaseBdev4", 00:29:37.580 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:37.580 "is_configured": true, 00:29:37.580 "data_offset": 2048, 00:29:37.580 "data_size": 63488 00:29:37.580 } 00:29:37.580 ] 00:29:37.580 }' 00:29:37.580 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:37.580 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:37.580 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:37.580 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:37.580 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:29:37.580 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:29:37.580 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:29:37.580 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:29:37.580 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:29:37.580 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:29:37.580 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:29:37.838 [2024-07-24 16:46:34.639126] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:38.096 [2024-07-24 16:46:34.814528] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000caae10 00:29:38.096 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:29:38.096 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:29:38.096 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:38.096 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:38.096 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:38.096 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:38.096 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:38.096 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.096 16:46:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.353 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:38.353 "name": "raid_bdev1", 00:29:38.353 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:38.353 "strip_size_kb": 0, 00:29:38.353 "state": "online", 00:29:38.353 "raid_level": "raid1", 00:29:38.353 "superblock": true, 00:29:38.353 "num_base_bdevs": 4, 00:29:38.353 "num_base_bdevs_discovered": 3, 00:29:38.353 "num_base_bdevs_operational": 3, 00:29:38.353 "process": { 00:29:38.353 "type": "rebuild", 00:29:38.353 "target": "spare", 00:29:38.353 "progress": { 00:29:38.353 "blocks": 36864, 00:29:38.353 
"percent": 58 00:29:38.353 } 00:29:38.353 }, 00:29:38.353 "base_bdevs_list": [ 00:29:38.353 { 00:29:38.353 "name": "spare", 00:29:38.353 "uuid": "f415e3fc-2948-5f1f-8f16-e2e6fcbd0619", 00:29:38.353 "is_configured": true, 00:29:38.353 "data_offset": 2048, 00:29:38.353 "data_size": 63488 00:29:38.353 }, 00:29:38.353 { 00:29:38.353 "name": null, 00:29:38.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:38.353 "is_configured": false, 00:29:38.353 "data_offset": 2048, 00:29:38.353 "data_size": 63488 00:29:38.353 }, 00:29:38.353 { 00:29:38.353 "name": "BaseBdev3", 00:29:38.353 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:38.353 "is_configured": true, 00:29:38.353 "data_offset": 2048, 00:29:38.353 "data_size": 63488 00:29:38.353 }, 00:29:38.353 { 00:29:38.353 "name": "BaseBdev4", 00:29:38.354 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:38.354 "is_configured": true, 00:29:38.354 "data_offset": 2048, 00:29:38.354 "data_size": 63488 00:29:38.354 } 00:29:38.354 ] 00:29:38.354 }' 00:29:38.354 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:38.354 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:38.354 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:38.354 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:38.354 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=999 00:29:38.354 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:38.354 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:38.354 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:38.354 16:46:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:38.354 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:38.354 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:38.354 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.354 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.612 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:38.612 "name": "raid_bdev1", 00:29:38.612 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:38.612 "strip_size_kb": 0, 00:29:38.612 "state": "online", 00:29:38.612 "raid_level": "raid1", 00:29:38.612 "superblock": true, 00:29:38.612 "num_base_bdevs": 4, 00:29:38.612 "num_base_bdevs_discovered": 3, 00:29:38.612 "num_base_bdevs_operational": 3, 00:29:38.612 "process": { 00:29:38.612 "type": "rebuild", 00:29:38.612 "target": "spare", 00:29:38.612 "progress": { 00:29:38.612 "blocks": 43008, 00:29:38.612 "percent": 67 00:29:38.612 } 00:29:38.612 }, 00:29:38.612 "base_bdevs_list": [ 00:29:38.612 { 00:29:38.612 "name": "spare", 00:29:38.612 "uuid": "f415e3fc-2948-5f1f-8f16-e2e6fcbd0619", 00:29:38.612 "is_configured": true, 00:29:38.612 "data_offset": 2048, 00:29:38.612 "data_size": 63488 00:29:38.612 }, 00:29:38.612 { 00:29:38.612 "name": null, 00:29:38.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:38.612 "is_configured": false, 00:29:38.612 "data_offset": 2048, 00:29:38.612 "data_size": 63488 00:29:38.612 }, 00:29:38.612 { 00:29:38.612 "name": "BaseBdev3", 00:29:38.612 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:38.612 "is_configured": true, 00:29:38.612 "data_offset": 2048, 00:29:38.612 "data_size": 63488 00:29:38.612 }, 00:29:38.612 { 00:29:38.612 "name": 
"BaseBdev4", 00:29:38.612 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:38.612 "is_configured": true, 00:29:38.612 "data_offset": 2048, 00:29:38.612 "data_size": 63488 00:29:38.612 } 00:29:38.612 ] 00:29:38.612 }' 00:29:38.612 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:38.612 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:38.612 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:38.612 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:38.612 16:46:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:39.547 [2024-07-24 16:46:36.326724] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:39.547 [2024-07-24 16:46:36.326800] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:39.547 [2024-07-24 16:46:36.326912] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:39.805 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:39.805 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:39.805 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:39.805 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:39.805 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:39.805 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:39.806 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:29:39.806 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.064 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:40.064 "name": "raid_bdev1", 00:29:40.064 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:40.064 "strip_size_kb": 0, 00:29:40.064 "state": "online", 00:29:40.064 "raid_level": "raid1", 00:29:40.064 "superblock": true, 00:29:40.064 "num_base_bdevs": 4, 00:29:40.064 "num_base_bdevs_discovered": 3, 00:29:40.064 "num_base_bdevs_operational": 3, 00:29:40.064 "base_bdevs_list": [ 00:29:40.064 { 00:29:40.064 "name": "spare", 00:29:40.064 "uuid": "f415e3fc-2948-5f1f-8f16-e2e6fcbd0619", 00:29:40.064 "is_configured": true, 00:29:40.064 "data_offset": 2048, 00:29:40.064 "data_size": 63488 00:29:40.064 }, 00:29:40.064 { 00:29:40.064 "name": null, 00:29:40.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:40.064 "is_configured": false, 00:29:40.064 "data_offset": 2048, 00:29:40.064 "data_size": 63488 00:29:40.064 }, 00:29:40.064 { 00:29:40.064 "name": "BaseBdev3", 00:29:40.064 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:40.064 "is_configured": true, 00:29:40.064 "data_offset": 2048, 00:29:40.064 "data_size": 63488 00:29:40.064 }, 00:29:40.064 { 00:29:40.064 "name": "BaseBdev4", 00:29:40.064 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:40.064 "is_configured": true, 00:29:40.064 "data_offset": 2048, 00:29:40.064 "data_size": 63488 00:29:40.064 } 00:29:40.064 ] 00:29:40.064 }' 00:29:40.064 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:40.064 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:40.064 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:40.064 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # 
[[ none == \s\p\a\r\e ]] 00:29:40.064 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:29:40.064 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:40.064 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:40.064 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:40.064 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:40.064 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:40.064 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.064 16:46:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:40.323 "name": "raid_bdev1", 00:29:40.323 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:40.323 "strip_size_kb": 0, 00:29:40.323 "state": "online", 00:29:40.323 "raid_level": "raid1", 00:29:40.323 "superblock": true, 00:29:40.323 "num_base_bdevs": 4, 00:29:40.323 "num_base_bdevs_discovered": 3, 00:29:40.323 "num_base_bdevs_operational": 3, 00:29:40.323 "base_bdevs_list": [ 00:29:40.323 { 00:29:40.323 "name": "spare", 00:29:40.323 "uuid": "f415e3fc-2948-5f1f-8f16-e2e6fcbd0619", 00:29:40.323 "is_configured": true, 00:29:40.323 "data_offset": 2048, 00:29:40.323 "data_size": 63488 00:29:40.323 }, 00:29:40.323 { 00:29:40.323 "name": null, 00:29:40.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:40.323 "is_configured": false, 00:29:40.323 "data_offset": 2048, 00:29:40.323 "data_size": 63488 00:29:40.323 }, 00:29:40.323 { 00:29:40.323 "name": "BaseBdev3", 00:29:40.323 "uuid": 
"c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:40.323 "is_configured": true, 00:29:40.323 "data_offset": 2048, 00:29:40.323 "data_size": 63488 00:29:40.323 }, 00:29:40.323 { 00:29:40.323 "name": "BaseBdev4", 00:29:40.323 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:40.323 "is_configured": true, 00:29:40.323 "data_offset": 2048, 00:29:40.323 "data_size": 63488 00:29:40.323 } 00:29:40.323 ] 00:29:40.323 }' 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:40.323 16:46:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.323 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.581 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:40.581 "name": "raid_bdev1", 00:29:40.581 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:40.581 "strip_size_kb": 0, 00:29:40.581 "state": "online", 00:29:40.581 "raid_level": "raid1", 00:29:40.581 "superblock": true, 00:29:40.581 "num_base_bdevs": 4, 00:29:40.581 "num_base_bdevs_discovered": 3, 00:29:40.581 "num_base_bdevs_operational": 3, 00:29:40.581 "base_bdevs_list": [ 00:29:40.581 { 00:29:40.581 "name": "spare", 00:29:40.581 "uuid": "f415e3fc-2948-5f1f-8f16-e2e6fcbd0619", 00:29:40.581 "is_configured": true, 00:29:40.581 "data_offset": 2048, 00:29:40.581 "data_size": 63488 00:29:40.581 }, 00:29:40.581 { 00:29:40.581 "name": null, 00:29:40.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:40.581 "is_configured": false, 00:29:40.581 "data_offset": 2048, 00:29:40.581 "data_size": 63488 00:29:40.581 }, 00:29:40.581 { 00:29:40.581 "name": "BaseBdev3", 00:29:40.581 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:40.581 "is_configured": true, 00:29:40.581 "data_offset": 2048, 00:29:40.581 "data_size": 63488 00:29:40.581 }, 00:29:40.581 { 00:29:40.581 "name": "BaseBdev4", 00:29:40.581 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:40.581 "is_configured": true, 00:29:40.581 "data_offset": 2048, 00:29:40.581 "data_size": 63488 00:29:40.581 } 00:29:40.581 ] 00:29:40.581 }' 00:29:40.581 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:40.581 16:46:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:41.148 16:46:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:41.408 [2024-07-24 16:46:38.099980] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:41.408 [2024-07-24 16:46:38.100015] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:41.408 [2024-07-24 16:46:38.100101] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:41.408 [2024-07-24 16:46:38.100201] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:41.408 [2024-07-24 16:46:38.100219] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:29:41.408 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.408 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:29:41.667 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:29:41.667 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:29:41.667 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:29:41.667 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:41.667 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:41.667 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:41.667 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:41.667 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:29:41.667 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:41.667 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:41.667 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:41.667 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:41.667 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:41.926 /dev/nbd0 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:41.926 1+0 records in 00:29:41.926 1+0 records out 00:29:41.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243789 s, 
16.8 MB/s 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:41.926 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:42.185 /dev/nbd1 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # 
(( i = 1 )) 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:42.185 1+0 records in 00:29:42.185 1+0 records out 00:29:42.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334706 s, 12.2 MB/s 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:42.185 16:46:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:42.443 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:42.443 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:42.443 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:42.443 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:42.443 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:42.443 16:46:39 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:42.443 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:42.702 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:42.702 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:42.702 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:42.702 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:42.702 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:42.702 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:42.702 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:42.702 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:42.702 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:42.702 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:42.961 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:42.961 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:42.961 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:42.961 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:42.961 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:42.961 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:29:42.961 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:42.961 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:42.961 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:29:42.961 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:43.219 16:46:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:43.219 [2024-07-24 16:46:40.040747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:43.219 [2024-07-24 16:46:40.040813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:43.219 [2024-07-24 16:46:40.040844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044a80 00:29:43.219 [2024-07-24 16:46:40.040862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:43.219 [2024-07-24 16:46:40.043755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:43.219 [2024-07-24 16:46:40.043793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:43.219 [2024-07-24 16:46:40.043899] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:43.219 [2024-07-24 16:46:40.043975] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:43.219 [2024-07-24 16:46:40.044205] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:43.219 [2024-07-24 16:46:40.044317] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:43.219 spare 00:29:43.219 16:46:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:43.219 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:43.219 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:43.219 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:43.219 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:43.220 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:43.220 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:43.220 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:43.220 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:43.220 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:43.220 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.220 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.504 [2024-07-24 16:46:40.144660] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000045080 00:29:43.504 [2024-07-24 16:46:40.144692] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:43.504 [2024-07-24 16:46:40.145078] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc94c0 00:29:43.504 [2024-07-24 16:46:40.145364] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000045080 00:29:43.504 [2024-07-24 16:46:40.145384] bdev_raid.c:1751:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000045080 00:29:43.504 [2024-07-24 16:46:40.145602] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:43.504 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:43.504 "name": "raid_bdev1", 00:29:43.504 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:43.504 "strip_size_kb": 0, 00:29:43.504 "state": "online", 00:29:43.504 "raid_level": "raid1", 00:29:43.504 "superblock": true, 00:29:43.504 "num_base_bdevs": 4, 00:29:43.504 "num_base_bdevs_discovered": 3, 00:29:43.504 "num_base_bdevs_operational": 3, 00:29:43.504 "base_bdevs_list": [ 00:29:43.504 { 00:29:43.504 "name": "spare", 00:29:43.504 "uuid": "f415e3fc-2948-5f1f-8f16-e2e6fcbd0619", 00:29:43.504 "is_configured": true, 00:29:43.504 "data_offset": 2048, 00:29:43.504 "data_size": 63488 00:29:43.504 }, 00:29:43.504 { 00:29:43.504 "name": null, 00:29:43.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:43.504 "is_configured": false, 00:29:43.504 "data_offset": 2048, 00:29:43.504 "data_size": 63488 00:29:43.504 }, 00:29:43.504 { 00:29:43.504 "name": "BaseBdev3", 00:29:43.504 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:43.504 "is_configured": true, 00:29:43.504 "data_offset": 2048, 00:29:43.504 "data_size": 63488 00:29:43.504 }, 00:29:43.504 { 00:29:43.504 "name": "BaseBdev4", 00:29:43.504 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:43.504 "is_configured": true, 00:29:43.504 "data_offset": 2048, 00:29:43.504 "data_size": 63488 00:29:43.504 } 00:29:43.504 ] 00:29:43.504 }' 00:29:43.504 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:43.504 16:46:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:44.071 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:44.071 16:46:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:44.072 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:44.072 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:44.072 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:44.072 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.072 16:46:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.331 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:44.331 "name": "raid_bdev1", 00:29:44.331 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:44.331 "strip_size_kb": 0, 00:29:44.331 "state": "online", 00:29:44.331 "raid_level": "raid1", 00:29:44.331 "superblock": true, 00:29:44.331 "num_base_bdevs": 4, 00:29:44.331 "num_base_bdevs_discovered": 3, 00:29:44.331 "num_base_bdevs_operational": 3, 00:29:44.331 "base_bdevs_list": [ 00:29:44.331 { 00:29:44.331 "name": "spare", 00:29:44.331 "uuid": "f415e3fc-2948-5f1f-8f16-e2e6fcbd0619", 00:29:44.331 "is_configured": true, 00:29:44.331 "data_offset": 2048, 00:29:44.331 "data_size": 63488 00:29:44.331 }, 00:29:44.331 { 00:29:44.331 "name": null, 00:29:44.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:44.331 "is_configured": false, 00:29:44.331 "data_offset": 2048, 00:29:44.331 "data_size": 63488 00:29:44.331 }, 00:29:44.331 { 00:29:44.331 "name": "BaseBdev3", 00:29:44.331 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:44.331 "is_configured": true, 00:29:44.331 "data_offset": 2048, 00:29:44.331 "data_size": 63488 00:29:44.331 }, 00:29:44.331 { 00:29:44.331 "name": "BaseBdev4", 00:29:44.331 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:44.331 "is_configured": 
true, 00:29:44.331 "data_offset": 2048, 00:29:44.331 "data_size": 63488 00:29:44.331 } 00:29:44.331 ] 00:29:44.331 }' 00:29:44.331 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:44.331 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:44.331 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:44.589 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:44.589 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.589 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:44.589 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:29:44.589 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:44.848 [2024-07-24 16:46:41.637707] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:44.848 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:44.848 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:44.848 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:44.848 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:44.848 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:44.848 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:44.848 
16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:44.848 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:44.848 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:44.848 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:44.848 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.848 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.107 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:45.107 "name": "raid_bdev1", 00:29:45.107 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:45.107 "strip_size_kb": 0, 00:29:45.107 "state": "online", 00:29:45.107 "raid_level": "raid1", 00:29:45.107 "superblock": true, 00:29:45.107 "num_base_bdevs": 4, 00:29:45.107 "num_base_bdevs_discovered": 2, 00:29:45.107 "num_base_bdevs_operational": 2, 00:29:45.107 "base_bdevs_list": [ 00:29:45.107 { 00:29:45.107 "name": null, 00:29:45.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.107 "is_configured": false, 00:29:45.107 "data_offset": 2048, 00:29:45.107 "data_size": 63488 00:29:45.107 }, 00:29:45.107 { 00:29:45.107 "name": null, 00:29:45.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.107 "is_configured": false, 00:29:45.107 "data_offset": 2048, 00:29:45.107 "data_size": 63488 00:29:45.107 }, 00:29:45.107 { 00:29:45.107 "name": "BaseBdev3", 00:29:45.107 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:45.107 "is_configured": true, 00:29:45.107 "data_offset": 2048, 00:29:45.107 "data_size": 63488 00:29:45.107 }, 00:29:45.107 { 00:29:45.107 "name": "BaseBdev4", 00:29:45.107 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 
00:29:45.107 "is_configured": true, 00:29:45.107 "data_offset": 2048, 00:29:45.107 "data_size": 63488 00:29:45.107 } 00:29:45.107 ] 00:29:45.107 }' 00:29:45.107 16:46:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:45.107 16:46:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:45.675 16:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:45.934 [2024-07-24 16:46:42.680531] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:45.934 [2024-07-24 16:46:42.680737] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:45.934 [2024-07-24 16:46:42.680758] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:45.934 [2024-07-24 16:46:42.680797] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:45.934 [2024-07-24 16:46:42.702565] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc9590 00:29:45.934 [2024-07-24 16:46:42.704895] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:45.934 16:46:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:29:46.870 16:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:46.870 16:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:46.870 16:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:46.870 16:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:46.870 16:46:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:46.870 16:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:46.870 16:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.129 16:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:47.129 "name": "raid_bdev1", 00:29:47.129 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:47.129 "strip_size_kb": 0, 00:29:47.129 "state": "online", 00:29:47.129 "raid_level": "raid1", 00:29:47.129 "superblock": true, 00:29:47.129 "num_base_bdevs": 4, 00:29:47.129 "num_base_bdevs_discovered": 3, 00:29:47.129 "num_base_bdevs_operational": 3, 00:29:47.129 "process": { 00:29:47.129 "type": "rebuild", 00:29:47.129 "target": "spare", 00:29:47.129 "progress": { 00:29:47.129 "blocks": 24576, 00:29:47.129 "percent": 38 00:29:47.129 } 00:29:47.129 }, 00:29:47.129 "base_bdevs_list": [ 00:29:47.129 { 00:29:47.129 "name": "spare", 00:29:47.129 "uuid": "f415e3fc-2948-5f1f-8f16-e2e6fcbd0619", 00:29:47.129 "is_configured": true, 00:29:47.129 "data_offset": 2048, 00:29:47.129 "data_size": 63488 00:29:47.129 }, 00:29:47.129 { 00:29:47.129 "name": null, 00:29:47.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.129 "is_configured": false, 00:29:47.129 "data_offset": 2048, 00:29:47.129 "data_size": 63488 00:29:47.129 }, 00:29:47.129 { 00:29:47.129 "name": "BaseBdev3", 00:29:47.129 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:47.129 "is_configured": true, 00:29:47.129 "data_offset": 2048, 00:29:47.129 "data_size": 63488 00:29:47.129 }, 00:29:47.129 { 00:29:47.129 "name": "BaseBdev4", 00:29:47.129 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:47.129 "is_configured": true, 00:29:47.129 "data_offset": 2048, 00:29:47.129 "data_size": 63488 00:29:47.129 } 00:29:47.129 ] 00:29:47.129 }' 
00:29:47.129 16:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:47.388 16:46:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:47.388 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:47.388 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:47.388 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:47.646 [2024-07-24 16:46:44.254333] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:47.646 [2024-07-24 16:46:44.317885] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:47.646 [2024-07-24 16:46:44.317943] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:47.646 [2024-07-24 16:46:44.317968] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:47.646 [2024-07-24 16:46:44.317980] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:47.646 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:47.646 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:47.646 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:47.646 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:47.646 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:47.646 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:47.646 16:46:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:47.647 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:47.647 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:47.647 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:47.647 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.647 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.905 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:47.905 "name": "raid_bdev1", 00:29:47.905 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:47.905 "strip_size_kb": 0, 00:29:47.905 "state": "online", 00:29:47.905 "raid_level": "raid1", 00:29:47.905 "superblock": true, 00:29:47.905 "num_base_bdevs": 4, 00:29:47.905 "num_base_bdevs_discovered": 2, 00:29:47.905 "num_base_bdevs_operational": 2, 00:29:47.905 "base_bdevs_list": [ 00:29:47.905 { 00:29:47.905 "name": null, 00:29:47.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.905 "is_configured": false, 00:29:47.905 "data_offset": 2048, 00:29:47.905 "data_size": 63488 00:29:47.905 }, 00:29:47.905 { 00:29:47.905 "name": null, 00:29:47.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.905 "is_configured": false, 00:29:47.905 "data_offset": 2048, 00:29:47.905 "data_size": 63488 00:29:47.905 }, 00:29:47.905 { 00:29:47.905 "name": "BaseBdev3", 00:29:47.905 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:47.905 "is_configured": true, 00:29:47.905 "data_offset": 2048, 00:29:47.905 "data_size": 63488 00:29:47.905 }, 00:29:47.905 { 00:29:47.905 "name": "BaseBdev4", 00:29:47.905 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 
00:29:47.905 "is_configured": true, 00:29:47.905 "data_offset": 2048, 00:29:47.905 "data_size": 63488 00:29:47.905 } 00:29:47.905 ] 00:29:47.905 }' 00:29:47.905 16:46:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:47.905 16:46:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:48.472 16:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:48.730 [2024-07-24 16:46:45.349658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:48.730 [2024-07-24 16:46:45.349721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:48.730 [2024-07-24 16:46:45.349751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045680 00:29:48.730 [2024-07-24 16:46:45.349767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:48.730 [2024-07-24 16:46:45.350437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:48.730 [2024-07-24 16:46:45.350466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:48.730 [2024-07-24 16:46:45.350578] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:48.730 [2024-07-24 16:46:45.350596] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:48.730 [2024-07-24 16:46:45.350621] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:48.730 [2024-07-24 16:46:45.350651] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:48.730 [2024-07-24 16:46:45.374127] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc9660 00:29:48.730 spare 00:29:48.730 [2024-07-24 16:46:45.376463] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:48.730 16:46:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:29:49.662 16:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:49.662 16:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:49.662 16:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:49.662 16:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:49.662 16:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:49.662 16:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.662 16:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.919 16:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:49.919 "name": "raid_bdev1", 00:29:49.919 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:49.919 "strip_size_kb": 0, 00:29:49.919 "state": "online", 00:29:49.919 "raid_level": "raid1", 00:29:49.919 "superblock": true, 00:29:49.919 "num_base_bdevs": 4, 00:29:49.919 "num_base_bdevs_discovered": 3, 00:29:49.919 "num_base_bdevs_operational": 3, 00:29:49.919 "process": { 00:29:49.919 "type": "rebuild", 00:29:49.919 "target": "spare", 00:29:49.919 "progress": { 00:29:49.919 "blocks": 24576, 00:29:49.919 
"percent": 38 00:29:49.919 } 00:29:49.919 }, 00:29:49.919 "base_bdevs_list": [ 00:29:49.919 { 00:29:49.919 "name": "spare", 00:29:49.919 "uuid": "f415e3fc-2948-5f1f-8f16-e2e6fcbd0619", 00:29:49.919 "is_configured": true, 00:29:49.920 "data_offset": 2048, 00:29:49.920 "data_size": 63488 00:29:49.920 }, 00:29:49.920 { 00:29:49.920 "name": null, 00:29:49.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.920 "is_configured": false, 00:29:49.920 "data_offset": 2048, 00:29:49.920 "data_size": 63488 00:29:49.920 }, 00:29:49.920 { 00:29:49.920 "name": "BaseBdev3", 00:29:49.920 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:49.920 "is_configured": true, 00:29:49.920 "data_offset": 2048, 00:29:49.920 "data_size": 63488 00:29:49.920 }, 00:29:49.920 { 00:29:49.920 "name": "BaseBdev4", 00:29:49.920 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:49.920 "is_configured": true, 00:29:49.920 "data_offset": 2048, 00:29:49.920 "data_size": 63488 00:29:49.920 } 00:29:49.920 ] 00:29:49.920 }' 00:29:49.920 16:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:49.920 16:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:49.920 16:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:49.920 16:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:49.920 16:46:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:50.177 [2024-07-24 16:46:46.929412] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:50.177 [2024-07-24 16:46:46.989470] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:50.177 [2024-07-24 16:46:46.989532] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:50.177 [2024-07-24 16:46:46.989554] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:50.177 [2024-07-24 16:46:46.989569] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:50.177 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:50.177 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:50.177 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:50.177 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:50.177 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:50.177 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:50.177 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:50.177 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:50.177 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:50.177 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:50.178 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:50.435 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.435 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:50.435 "name": "raid_bdev1", 00:29:50.435 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:50.435 "strip_size_kb": 0, 00:29:50.435 "state": 
"online", 00:29:50.435 "raid_level": "raid1", 00:29:50.435 "superblock": true, 00:29:50.435 "num_base_bdevs": 4, 00:29:50.435 "num_base_bdevs_discovered": 2, 00:29:50.435 "num_base_bdevs_operational": 2, 00:29:50.435 "base_bdevs_list": [ 00:29:50.435 { 00:29:50.435 "name": null, 00:29:50.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.435 "is_configured": false, 00:29:50.435 "data_offset": 2048, 00:29:50.435 "data_size": 63488 00:29:50.435 }, 00:29:50.435 { 00:29:50.435 "name": null, 00:29:50.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:50.435 "is_configured": false, 00:29:50.435 "data_offset": 2048, 00:29:50.435 "data_size": 63488 00:29:50.435 }, 00:29:50.435 { 00:29:50.435 "name": "BaseBdev3", 00:29:50.435 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:50.435 "is_configured": true, 00:29:50.435 "data_offset": 2048, 00:29:50.435 "data_size": 63488 00:29:50.435 }, 00:29:50.435 { 00:29:50.435 "name": "BaseBdev4", 00:29:50.435 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:50.435 "is_configured": true, 00:29:50.435 "data_offset": 2048, 00:29:50.435 "data_size": 63488 00:29:50.435 } 00:29:50.435 ] 00:29:50.435 }' 00:29:50.435 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:50.435 16:46:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.001 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:51.001 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:51.001 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:51.001 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:51.001 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:51.001 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.001 16:46:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.258 16:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:51.258 "name": "raid_bdev1", 00:29:51.258 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:51.258 "strip_size_kb": 0, 00:29:51.258 "state": "online", 00:29:51.258 "raid_level": "raid1", 00:29:51.258 "superblock": true, 00:29:51.258 "num_base_bdevs": 4, 00:29:51.258 "num_base_bdevs_discovered": 2, 00:29:51.258 "num_base_bdevs_operational": 2, 00:29:51.258 "base_bdevs_list": [ 00:29:51.258 { 00:29:51.258 "name": null, 00:29:51.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.258 "is_configured": false, 00:29:51.258 "data_offset": 2048, 00:29:51.258 "data_size": 63488 00:29:51.258 }, 00:29:51.258 { 00:29:51.258 "name": null, 00:29:51.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.258 "is_configured": false, 00:29:51.258 "data_offset": 2048, 00:29:51.258 "data_size": 63488 00:29:51.258 }, 00:29:51.258 { 00:29:51.258 "name": "BaseBdev3", 00:29:51.258 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:51.258 "is_configured": true, 00:29:51.258 "data_offset": 2048, 00:29:51.258 "data_size": 63488 00:29:51.258 }, 00:29:51.258 { 00:29:51.258 "name": "BaseBdev4", 00:29:51.258 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:51.258 "is_configured": true, 00:29:51.258 "data_offset": 2048, 00:29:51.258 "data_size": 63488 00:29:51.258 } 00:29:51.258 ] 00:29:51.258 }' 00:29:51.258 16:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:51.516 16:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:51.516 16:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:29:51.516 16:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:51.516 16:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:51.774 16:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:51.774 [2024-07-24 16:46:48.605248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:51.774 [2024-07-24 16:46:48.605320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:51.774 [2024-07-24 16:46:48.605347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045c80 00:29:51.774 [2024-07-24 16:46:48.605365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:51.774 [2024-07-24 16:46:48.605934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:51.774 [2024-07-24 16:46:48.605964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:51.774 [2024-07-24 16:46:48.606055] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:51.774 [2024-07-24 16:46:48.606077] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:51.774 [2024-07-24 16:46:48.606091] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:51.774 BaseBdev1 00:29:51.774 16:46:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:29:53.145 16:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:29:53.145 16:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:53.145 16:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:53.145 16:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:53.145 16:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:53.145 16:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:53.145 16:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:53.145 16:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:53.145 16:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:53.145 16:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:53.145 16:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.145 16:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.145 16:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:53.145 "name": "raid_bdev1", 00:29:53.145 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:53.145 "strip_size_kb": 0, 00:29:53.145 "state": "online", 00:29:53.145 "raid_level": "raid1", 00:29:53.145 "superblock": true, 00:29:53.145 "num_base_bdevs": 4, 00:29:53.145 "num_base_bdevs_discovered": 2, 00:29:53.145 "num_base_bdevs_operational": 2, 00:29:53.145 "base_bdevs_list": [ 00:29:53.145 { 00:29:53.145 "name": null, 00:29:53.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.145 "is_configured": false, 00:29:53.145 "data_offset": 2048, 00:29:53.145 "data_size": 63488 
00:29:53.145 }, 00:29:53.145 { 00:29:53.145 "name": null, 00:29:53.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.145 "is_configured": false, 00:29:53.145 "data_offset": 2048, 00:29:53.145 "data_size": 63488 00:29:53.145 }, 00:29:53.145 { 00:29:53.145 "name": "BaseBdev3", 00:29:53.145 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:53.145 "is_configured": true, 00:29:53.145 "data_offset": 2048, 00:29:53.145 "data_size": 63488 00:29:53.145 }, 00:29:53.145 { 00:29:53.145 "name": "BaseBdev4", 00:29:53.146 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:53.146 "is_configured": true, 00:29:53.146 "data_offset": 2048, 00:29:53.146 "data_size": 63488 00:29:53.146 } 00:29:53.146 ] 00:29:53.146 }' 00:29:53.146 16:46:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:53.146 16:46:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:53.715 16:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:53.715 16:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:53.715 16:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:53.715 16:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:53.715 16:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:53.715 16:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.715 16:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:53.972 "name": "raid_bdev1", 00:29:53.972 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 
00:29:53.972 "strip_size_kb": 0, 00:29:53.972 "state": "online", 00:29:53.972 "raid_level": "raid1", 00:29:53.972 "superblock": true, 00:29:53.972 "num_base_bdevs": 4, 00:29:53.972 "num_base_bdevs_discovered": 2, 00:29:53.972 "num_base_bdevs_operational": 2, 00:29:53.972 "base_bdevs_list": [ 00:29:53.972 { 00:29:53.972 "name": null, 00:29:53.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.972 "is_configured": false, 00:29:53.972 "data_offset": 2048, 00:29:53.972 "data_size": 63488 00:29:53.972 }, 00:29:53.972 { 00:29:53.972 "name": null, 00:29:53.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.972 "is_configured": false, 00:29:53.972 "data_offset": 2048, 00:29:53.972 "data_size": 63488 00:29:53.972 }, 00:29:53.972 { 00:29:53.972 "name": "BaseBdev3", 00:29:53.972 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:53.972 "is_configured": true, 00:29:53.972 "data_offset": 2048, 00:29:53.972 "data_size": 63488 00:29:53.972 }, 00:29:53.972 { 00:29:53.972 "name": "BaseBdev4", 00:29:53.972 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:53.972 "is_configured": true, 00:29:53.972 "data_offset": 2048, 00:29:53.972 "data_size": 63488 00:29:53.972 } 00:29:53.972 ] 00:29:53.972 }' 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 
00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:29:53.972 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:54.230 [2024-07-24 16:46:50.931572] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:54.230 [2024-07-24 16:46:50.931738] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:54.230 [2024-07-24 16:46:50.931764] 
bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:54.230 request: 00:29:54.230 { 00:29:54.230 "base_bdev": "BaseBdev1", 00:29:54.230 "raid_bdev": "raid_bdev1", 00:29:54.230 "method": "bdev_raid_add_base_bdev", 00:29:54.230 "req_id": 1 00:29:54.230 } 00:29:54.230 Got JSON-RPC error response 00:29:54.230 response: 00:29:54.230 { 00:29:54.230 "code": -22, 00:29:54.230 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:54.230 } 00:29:54.230 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:29:54.230 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:54.230 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:54.230 16:46:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:54.230 16:46:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:29:55.161 16:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:55.161 16:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:55.161 16:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:55.161 16:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:55.161 16:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:55.161 16:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:55.161 16:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:55.161 16:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:55.161 16:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:29:55.161 16:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:55.161 16:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:55.161 16:46:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:55.418 16:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:55.418 "name": "raid_bdev1", 00:29:55.418 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:55.418 "strip_size_kb": 0, 00:29:55.418 "state": "online", 00:29:55.418 "raid_level": "raid1", 00:29:55.418 "superblock": true, 00:29:55.419 "num_base_bdevs": 4, 00:29:55.419 "num_base_bdevs_discovered": 2, 00:29:55.419 "num_base_bdevs_operational": 2, 00:29:55.419 "base_bdevs_list": [ 00:29:55.419 { 00:29:55.419 "name": null, 00:29:55.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.419 "is_configured": false, 00:29:55.419 "data_offset": 2048, 00:29:55.419 "data_size": 63488 00:29:55.419 }, 00:29:55.419 { 00:29:55.419 "name": null, 00:29:55.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.419 "is_configured": false, 00:29:55.419 "data_offset": 2048, 00:29:55.419 "data_size": 63488 00:29:55.419 }, 00:29:55.419 { 00:29:55.419 "name": "BaseBdev3", 00:29:55.419 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 00:29:55.419 "is_configured": true, 00:29:55.419 "data_offset": 2048, 00:29:55.419 "data_size": 63488 00:29:55.419 }, 00:29:55.419 { 00:29:55.419 "name": "BaseBdev4", 00:29:55.419 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:55.419 "is_configured": true, 00:29:55.419 "data_offset": 2048, 00:29:55.419 "data_size": 63488 00:29:55.419 } 00:29:55.419 ] 00:29:55.419 }' 00:29:55.419 16:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:55.419 16:46:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.984 16:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:55.984 16:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:55.984 16:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:55.984 16:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:55.984 16:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:55.984 16:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:55.984 16:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.242 16:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:56.242 "name": "raid_bdev1", 00:29:56.242 "uuid": "817775d8-c56d-41eb-bc92-6b7fa5fc546a", 00:29:56.242 "strip_size_kb": 0, 00:29:56.242 "state": "online", 00:29:56.242 "raid_level": "raid1", 00:29:56.242 "superblock": true, 00:29:56.242 "num_base_bdevs": 4, 00:29:56.242 "num_base_bdevs_discovered": 2, 00:29:56.242 "num_base_bdevs_operational": 2, 00:29:56.242 "base_bdevs_list": [ 00:29:56.242 { 00:29:56.242 "name": null, 00:29:56.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.242 "is_configured": false, 00:29:56.242 "data_offset": 2048, 00:29:56.242 "data_size": 63488 00:29:56.242 }, 00:29:56.242 { 00:29:56.242 "name": null, 00:29:56.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.242 "is_configured": false, 00:29:56.242 "data_offset": 2048, 00:29:56.242 "data_size": 63488 00:29:56.242 }, 00:29:56.242 { 00:29:56.242 "name": "BaseBdev3", 00:29:56.242 "uuid": "c246ba77-aa2d-5706-b4c9-172a460746ab", 
00:29:56.242 "is_configured": true, 00:29:56.242 "data_offset": 2048, 00:29:56.242 "data_size": 63488 00:29:56.242 }, 00:29:56.242 { 00:29:56.242 "name": "BaseBdev4", 00:29:56.242 "uuid": "79b2151a-a4a8-5cf5-a960-7b4ce91845cd", 00:29:56.242 "is_configured": true, 00:29:56.242 "data_offset": 2048, 00:29:56.242 "data_size": 63488 00:29:56.242 } 00:29:56.242 ] 00:29:56.242 }' 00:29:56.242 16:46:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:56.242 16:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:56.242 16:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:56.242 16:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:56.242 16:46:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 1765293 00:29:56.242 16:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 1765293 ']' 00:29:56.242 16:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 1765293 00:29:56.242 16:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:29:56.242 16:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:56.242 16:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1765293 00:29:56.500 16:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:56.500 16:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:56.500 16:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1765293' 00:29:56.500 killing process with pid 1765293 00:29:56.500 16:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 1765293 00:29:56.500 
Received shutdown signal, test time was about 60.000000 seconds 00:29:56.500 00:29:56.500 Latency(us) 00:29:56.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.500 =================================================================================================================== 00:29:56.500 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:56.500 [2024-07-24 16:46:53.133838] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:56.500 [2024-07-24 16:46:53.133967] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:56.500 16:46:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 1765293 00:29:56.500 [2024-07-24 16:46:53.134041] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:56.500 [2024-07-24 16:46:53.134061] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000045080 name raid_bdev1, state offline 00:29:57.095 [2024-07-24 16:46:53.711132] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:29:58.995 00:29:58.995 real 0m38.624s 00:29:58.995 user 0m54.969s 00:29:58.995 sys 0m6.418s 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:58.995 ************************************ 00:29:58.995 END TEST raid_rebuild_test_sb 00:29:58.995 ************************************ 00:29:58.995 16:46:55 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:29:58.995 16:46:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:58.995 16:46:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:58.995 16:46:55 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:29:58.995 ************************************ 00:29:58.995 START TEST raid_rebuild_test_io 00:29:58.995 ************************************ 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:58.995 16:46:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=1772034 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 1772034 /var/tmp/spdk-raid.sock 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 1772034 ']' 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:58.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:58.995 16:46:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:58.995 [2024-07-24 16:46:55.660154] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:29:58.995 [2024-07-24 16:46:55.660279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1772034 ] 00:29:58.995 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:58.995 Zero copy mechanism will not be used. 
00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:58.995 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:58.995 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.995 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:58.995 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.996 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:58.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.996 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:58.996 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:58.996 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:59.254 [2024-07-24 16:46:55.886258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.512 [2024-07-24 16:46:56.169549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.771 [2024-07-24 16:46:56.514548] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:59.771 [2024-07-24 16:46:56.514585] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:00.028 16:46:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:00.028 16:46:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:30:00.028 16:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:00.028 16:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:00.286 BaseBdev1_malloc 00:30:00.286 16:46:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:00.544 [2024-07-24 16:46:57.182306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:00.544 [2024-07-24 16:46:57.182376] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:30:00.544 [2024-07-24 16:46:57.182407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:30:00.544 [2024-07-24 16:46:57.182426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:00.544 [2024-07-24 16:46:57.185207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:00.544 [2024-07-24 16:46:57.185246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:00.544 BaseBdev1 00:30:00.544 16:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:00.544 16:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:00.802 BaseBdev2_malloc 00:30:00.802 16:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:01.061 [2024-07-24 16:46:57.688067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:01.061 [2024-07-24 16:46:57.688128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:01.061 [2024-07-24 16:46:57.688166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:30:01.061 [2024-07-24 16:46:57.688188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:01.061 [2024-07-24 16:46:57.690925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:01.061 [2024-07-24 16:46:57.690964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:01.061 BaseBdev2 00:30:01.061 16:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in 
"${base_bdevs[@]}" 00:30:01.061 16:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:01.319 BaseBdev3_malloc 00:30:01.319 16:46:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:01.577 [2024-07-24 16:46:58.183893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:01.577 [2024-07-24 16:46:58.183963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:01.577 [2024-07-24 16:46:58.183994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:30:01.577 [2024-07-24 16:46:58.184012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:01.577 [2024-07-24 16:46:58.186726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:01.577 [2024-07-24 16:46:58.186763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:01.577 BaseBdev3 00:30:01.577 16:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:01.577 16:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:01.835 BaseBdev4_malloc 00:30:01.835 16:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:01.835 [2024-07-24 16:46:58.694169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:01.835 [2024-07-24 
16:46:58.694238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:01.835 [2024-07-24 16:46:58.694266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:30:01.835 [2024-07-24 16:46:58.694287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:02.094 [2024-07-24 16:46:58.697057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:02.094 [2024-07-24 16:46:58.697094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:02.094 BaseBdev4 00:30:02.094 16:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:02.352 spare_malloc 00:30:02.352 16:46:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:02.352 spare_delay 00:30:02.610 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:02.610 [2024-07-24 16:46:59.415618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:02.610 [2024-07-24 16:46:59.415680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:02.610 [2024-07-24 16:46:59.415708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:30:02.610 [2024-07-24 16:46:59.415726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:02.610 [2024-07-24 16:46:59.418516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:02.610 [2024-07-24 16:46:59.418553] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:02.610 spare 00:30:02.610 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:02.868 [2024-07-24 16:46:59.644276] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:02.869 [2024-07-24 16:46:59.646584] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:02.869 [2024-07-24 16:46:59.646657] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:02.869 [2024-07-24 16:46:59.646723] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:02.869 [2024-07-24 16:46:59.646829] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:30:02.869 [2024-07-24 16:46:59.646847] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:02.869 [2024-07-24 16:46:59.647239] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:30:02.869 [2024-07-24 16:46:59.647501] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:30:02.869 [2024-07-24 16:46:59.647522] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:30:02.869 [2024-07-24 16:46:59.647757] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:02.869 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:02.869 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:02.869 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:30:02.869 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:02.869 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:02.869 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:02.869 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:02.869 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:02.869 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:02.869 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:02.869 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:02.869 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.127 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:03.127 "name": "raid_bdev1", 00:30:03.127 "uuid": "f89c489f-3346-413b-a13b-f98b9b123acd", 00:30:03.127 "strip_size_kb": 0, 00:30:03.127 "state": "online", 00:30:03.127 "raid_level": "raid1", 00:30:03.127 "superblock": false, 00:30:03.127 "num_base_bdevs": 4, 00:30:03.127 "num_base_bdevs_discovered": 4, 00:30:03.127 "num_base_bdevs_operational": 4, 00:30:03.127 "base_bdevs_list": [ 00:30:03.127 { 00:30:03.127 "name": "BaseBdev1", 00:30:03.127 "uuid": "8d5e3906-0d19-5af1-b491-961eea5f3a7b", 00:30:03.127 "is_configured": true, 00:30:03.127 "data_offset": 0, 00:30:03.127 "data_size": 65536 00:30:03.127 }, 00:30:03.127 { 00:30:03.127 "name": "BaseBdev2", 00:30:03.127 "uuid": "8f351eb1-851e-5fb0-a7e7-fb6dacacebed", 00:30:03.127 "is_configured": true, 00:30:03.127 "data_offset": 0, 00:30:03.127 "data_size": 65536 
00:30:03.127 }, 00:30:03.127 { 00:30:03.127 "name": "BaseBdev3", 00:30:03.127 "uuid": "227f0825-4329-5a6d-9e15-7cac9b6b6fc8", 00:30:03.127 "is_configured": true, 00:30:03.127 "data_offset": 0, 00:30:03.127 "data_size": 65536 00:30:03.127 }, 00:30:03.127 { 00:30:03.127 "name": "BaseBdev4", 00:30:03.127 "uuid": "5098e970-3609-5d2b-b78b-59ef47dd1921", 00:30:03.127 "is_configured": true, 00:30:03.127 "data_offset": 0, 00:30:03.127 "data_size": 65536 00:30:03.127 } 00:30:03.127 ] 00:30:03.127 }' 00:30:03.127 16:46:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:03.127 16:46:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:03.694 16:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:03.694 16:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:30:03.952 [2024-07-24 16:47:00.679435] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:03.952 16:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:30:03.952 16:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.952 16:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:04.209 16:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:30:04.209 16:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:30:04.209 16:47:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:04.209 16:47:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:04.209 [2024-07-24 16:47:01.040574] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:30:04.209 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:04.209 Zero copy mechanism will not be used. 00:30:04.209 Running I/O for 60 seconds... 00:30:04.467 [2024-07-24 16:47:01.136517] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:04.467 [2024-07-24 16:47:01.152093] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50 00:30:04.467 16:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:04.467 16:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:04.467 16:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:04.467 16:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:04.467 16:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:04.467 16:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:04.467 16:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:04.467 16:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:04.467 16:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:04.467 16:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:04.467 16:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:30:04.467 16:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.732 16:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:04.732 "name": "raid_bdev1", 00:30:04.732 "uuid": "f89c489f-3346-413b-a13b-f98b9b123acd", 00:30:04.732 "strip_size_kb": 0, 00:30:04.732 "state": "online", 00:30:04.732 "raid_level": "raid1", 00:30:04.732 "superblock": false, 00:30:04.732 "num_base_bdevs": 4, 00:30:04.732 "num_base_bdevs_discovered": 3, 00:30:04.732 "num_base_bdevs_operational": 3, 00:30:04.732 "base_bdevs_list": [ 00:30:04.732 { 00:30:04.732 "name": null, 00:30:04.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.732 "is_configured": false, 00:30:04.732 "data_offset": 0, 00:30:04.732 "data_size": 65536 00:30:04.732 }, 00:30:04.732 { 00:30:04.732 "name": "BaseBdev2", 00:30:04.732 "uuid": "8f351eb1-851e-5fb0-a7e7-fb6dacacebed", 00:30:04.732 "is_configured": true, 00:30:04.732 "data_offset": 0, 00:30:04.732 "data_size": 65536 00:30:04.732 }, 00:30:04.732 { 00:30:04.732 "name": "BaseBdev3", 00:30:04.732 "uuid": "227f0825-4329-5a6d-9e15-7cac9b6b6fc8", 00:30:04.732 "is_configured": true, 00:30:04.732 "data_offset": 0, 00:30:04.732 "data_size": 65536 00:30:04.732 }, 00:30:04.732 { 00:30:04.732 "name": "BaseBdev4", 00:30:04.732 "uuid": "5098e970-3609-5d2b-b78b-59ef47dd1921", 00:30:04.732 "is_configured": true, 00:30:04.732 "data_offset": 0, 00:30:04.732 "data_size": 65536 00:30:04.732 } 00:30:04.732 ] 00:30:04.732 }' 00:30:04.732 16:47:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:04.732 16:47:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:05.298 16:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:05.556 [2024-07-24 
16:47:02.235889] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:05.556 16:47:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:05.556 [2024-07-24 16:47:02.311422] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:30:05.556 [2024-07-24 16:47:02.313888] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:05.814 [2024-07-24 16:47:02.424393] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:05.814 [2024-07-24 16:47:02.424785] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:05.814 [2024-07-24 16:47:02.562396] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:05.814 [2024-07-24 16:47:02.563067] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:06.072 [2024-07-24 16:47:02.890000] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:06.072 [2024-07-24 16:47:02.890337] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:06.331 [2024-07-24 16:47:03.033670] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:06.589 16:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:06.589 16:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:06.589 16:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:06.589 16:47:03 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:30:06.589 16:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:06.589 16:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.589 16:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.846 [2024-07-24 16:47:03.528933] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:06.846 [2024-07-24 16:47:03.529195] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:06.846 16:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:06.846 "name": "raid_bdev1", 00:30:06.846 "uuid": "f89c489f-3346-413b-a13b-f98b9b123acd", 00:30:06.846 "strip_size_kb": 0, 00:30:06.846 "state": "online", 00:30:06.846 "raid_level": "raid1", 00:30:06.846 "superblock": false, 00:30:06.846 "num_base_bdevs": 4, 00:30:06.846 "num_base_bdevs_discovered": 4, 00:30:06.846 "num_base_bdevs_operational": 4, 00:30:06.846 "process": { 00:30:06.846 "type": "rebuild", 00:30:06.846 "target": "spare", 00:30:06.846 "progress": { 00:30:06.846 "blocks": 14336, 00:30:06.846 "percent": 21 00:30:06.846 } 00:30:06.846 }, 00:30:06.846 "base_bdevs_list": [ 00:30:06.846 { 00:30:06.846 "name": "spare", 00:30:06.846 "uuid": "2ea7f490-3a1b-5d18-bc6b-be4ed6ae9921", 00:30:06.846 "is_configured": true, 00:30:06.846 "data_offset": 0, 00:30:06.846 "data_size": 65536 00:30:06.846 }, 00:30:06.846 { 00:30:06.846 "name": "BaseBdev2", 00:30:06.846 "uuid": "8f351eb1-851e-5fb0-a7e7-fb6dacacebed", 00:30:06.846 "is_configured": true, 00:30:06.846 "data_offset": 0, 00:30:06.846 "data_size": 65536 00:30:06.846 }, 00:30:06.846 { 00:30:06.846 "name": "BaseBdev3", 
00:30:06.846 "uuid": "227f0825-4329-5a6d-9e15-7cac9b6b6fc8", 00:30:06.846 "is_configured": true, 00:30:06.846 "data_offset": 0, 00:30:06.846 "data_size": 65536 00:30:06.846 }, 00:30:06.846 { 00:30:06.846 "name": "BaseBdev4", 00:30:06.846 "uuid": "5098e970-3609-5d2b-b78b-59ef47dd1921", 00:30:06.846 "is_configured": true, 00:30:06.846 "data_offset": 0, 00:30:06.846 "data_size": 65536 00:30:06.846 } 00:30:06.846 ] 00:30:06.846 }' 00:30:06.846 16:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:06.847 16:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:06.847 16:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:06.847 16:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:06.847 16:47:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:07.104 [2024-07-24 16:47:03.855582] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:07.104 [2024-07-24 16:47:03.950586] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:07.363 [2024-07-24 16:47:03.973153] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:07.363 [2024-07-24 16:47:03.973196] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:07.363 [2024-07-24 16:47:03.973215] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:07.363 [2024-07-24 16:47:04.025520] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50 00:30:07.363 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:30:07.363 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:07.363 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:07.363 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:07.363 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:07.363 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:07.363 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:07.363 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:07.363 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:07.363 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:07.363 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:07.363 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.621 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:07.621 "name": "raid_bdev1", 00:30:07.621 "uuid": "f89c489f-3346-413b-a13b-f98b9b123acd", 00:30:07.621 "strip_size_kb": 0, 00:30:07.621 "state": "online", 00:30:07.621 "raid_level": "raid1", 00:30:07.621 "superblock": false, 00:30:07.621 "num_base_bdevs": 4, 00:30:07.621 "num_base_bdevs_discovered": 3, 00:30:07.621 "num_base_bdevs_operational": 3, 00:30:07.621 "base_bdevs_list": [ 00:30:07.621 { 00:30:07.621 "name": null, 00:30:07.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.621 "is_configured": false, 00:30:07.621 "data_offset": 0, 00:30:07.621 "data_size": 65536 00:30:07.621 
}, 00:30:07.621 { 00:30:07.621 "name": "BaseBdev2", 00:30:07.621 "uuid": "8f351eb1-851e-5fb0-a7e7-fb6dacacebed", 00:30:07.621 "is_configured": true, 00:30:07.621 "data_offset": 0, 00:30:07.621 "data_size": 65536 00:30:07.621 }, 00:30:07.621 { 00:30:07.621 "name": "BaseBdev3", 00:30:07.621 "uuid": "227f0825-4329-5a6d-9e15-7cac9b6b6fc8", 00:30:07.621 "is_configured": true, 00:30:07.621 "data_offset": 0, 00:30:07.621 "data_size": 65536 00:30:07.621 }, 00:30:07.621 { 00:30:07.621 "name": "BaseBdev4", 00:30:07.621 "uuid": "5098e970-3609-5d2b-b78b-59ef47dd1921", 00:30:07.621 "is_configured": true, 00:30:07.621 "data_offset": 0, 00:30:07.621 "data_size": 65536 00:30:07.621 } 00:30:07.621 ] 00:30:07.621 }' 00:30:07.621 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:07.621 16:47:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:08.187 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:08.187 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:08.187 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:08.187 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:08.187 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:08.187 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:08.187 16:47:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:08.445 16:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:08.445 "name": "raid_bdev1", 00:30:08.445 "uuid": "f89c489f-3346-413b-a13b-f98b9b123acd", 00:30:08.445 
"strip_size_kb": 0, 00:30:08.445 "state": "online", 00:30:08.445 "raid_level": "raid1", 00:30:08.445 "superblock": false, 00:30:08.445 "num_base_bdevs": 4, 00:30:08.445 "num_base_bdevs_discovered": 3, 00:30:08.445 "num_base_bdevs_operational": 3, 00:30:08.445 "base_bdevs_list": [ 00:30:08.445 { 00:30:08.445 "name": null, 00:30:08.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:08.445 "is_configured": false, 00:30:08.445 "data_offset": 0, 00:30:08.445 "data_size": 65536 00:30:08.445 }, 00:30:08.445 { 00:30:08.445 "name": "BaseBdev2", 00:30:08.445 "uuid": "8f351eb1-851e-5fb0-a7e7-fb6dacacebed", 00:30:08.445 "is_configured": true, 00:30:08.445 "data_offset": 0, 00:30:08.445 "data_size": 65536 00:30:08.445 }, 00:30:08.445 { 00:30:08.445 "name": "BaseBdev3", 00:30:08.445 "uuid": "227f0825-4329-5a6d-9e15-7cac9b6b6fc8", 00:30:08.445 "is_configured": true, 00:30:08.445 "data_offset": 0, 00:30:08.445 "data_size": 65536 00:30:08.445 }, 00:30:08.445 { 00:30:08.445 "name": "BaseBdev4", 00:30:08.445 "uuid": "5098e970-3609-5d2b-b78b-59ef47dd1921", 00:30:08.445 "is_configured": true, 00:30:08.445 "data_offset": 0, 00:30:08.445 "data_size": 65536 00:30:08.445 } 00:30:08.445 ] 00:30:08.445 }' 00:30:08.445 16:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:08.445 16:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:08.445 16:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:08.445 16:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:08.445 16:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:08.704 [2024-07-24 16:47:05.461571] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:08.704 
16:47:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:30:08.704 [2024-07-24 16:47:05.542095] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010bf0 00:30:08.704 [2024-07-24 16:47:05.544524] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:08.961 [2024-07-24 16:47:05.669759] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:08.962 [2024-07-24 16:47:05.678773] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:09.220 [2024-07-24 16:47:05.928666] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:09.220 [2024-07-24 16:47:05.929366] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:09.478 [2024-07-24 16:47:06.302057] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:09.737 [2024-07-24 16:47:06.421479] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:09.737 [2024-07-24 16:47:06.422149] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:09.737 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:09.737 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:09.737 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:09.737 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:09.737 16:47:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:09.737 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.737 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.997 [2024-07-24 16:47:06.757440] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:09.997 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:09.997 "name": "raid_bdev1", 00:30:09.997 "uuid": "f89c489f-3346-413b-a13b-f98b9b123acd", 00:30:09.997 "strip_size_kb": 0, 00:30:09.997 "state": "online", 00:30:09.997 "raid_level": "raid1", 00:30:09.997 "superblock": false, 00:30:09.997 "num_base_bdevs": 4, 00:30:09.997 "num_base_bdevs_discovered": 4, 00:30:09.997 "num_base_bdevs_operational": 4, 00:30:09.997 "process": { 00:30:09.997 "type": "rebuild", 00:30:09.997 "target": "spare", 00:30:09.997 "progress": { 00:30:09.997 "blocks": 12288, 00:30:09.997 "percent": 18 00:30:09.997 } 00:30:09.997 }, 00:30:09.997 "base_bdevs_list": [ 00:30:09.997 { 00:30:09.997 "name": "spare", 00:30:09.997 "uuid": "2ea7f490-3a1b-5d18-bc6b-be4ed6ae9921", 00:30:09.997 "is_configured": true, 00:30:09.997 "data_offset": 0, 00:30:09.997 "data_size": 65536 00:30:09.997 }, 00:30:09.997 { 00:30:09.997 "name": "BaseBdev2", 00:30:09.997 "uuid": "8f351eb1-851e-5fb0-a7e7-fb6dacacebed", 00:30:09.997 "is_configured": true, 00:30:09.997 "data_offset": 0, 00:30:09.997 "data_size": 65536 00:30:09.997 }, 00:30:09.997 { 00:30:09.997 "name": "BaseBdev3", 00:30:09.997 "uuid": "227f0825-4329-5a6d-9e15-7cac9b6b6fc8", 00:30:09.997 "is_configured": true, 00:30:09.997 "data_offset": 0, 00:30:09.997 "data_size": 65536 00:30:09.997 }, 00:30:09.997 { 00:30:09.997 "name": "BaseBdev4", 00:30:09.997 "uuid": 
"5098e970-3609-5d2b-b78b-59ef47dd1921", 00:30:09.997 "is_configured": true, 00:30:09.997 "data_offset": 0, 00:30:09.997 "data_size": 65536 00:30:09.997 } 00:30:09.997 ] 00:30:09.997 }' 00:30:09.997 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:09.997 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:09.997 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:09.997 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:09.997 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:30:09.997 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:30:09.997 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:30:09.997 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:30:09.997 16:47:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:10.287 [2024-07-24 16:47:07.064223] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:10.560 [2024-07-24 16:47:07.264568] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010a50 00:30:10.560 [2024-07-24 16:47:07.264613] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010bf0 00:30:10.560 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:30:10.560 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:30:10.560 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:30:10.560 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:10.560 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:10.560 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:10.560 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:10.560 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.560 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.560 [2024-07-24 16:47:07.395722] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:10.817 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:10.817 "name": "raid_bdev1", 00:30:10.817 "uuid": "f89c489f-3346-413b-a13b-f98b9b123acd", 00:30:10.817 "strip_size_kb": 0, 00:30:10.817 "state": "online", 00:30:10.817 "raid_level": "raid1", 00:30:10.817 "superblock": false, 00:30:10.817 "num_base_bdevs": 4, 00:30:10.817 "num_base_bdevs_discovered": 3, 00:30:10.817 "num_base_bdevs_operational": 3, 00:30:10.817 "process": { 00:30:10.817 "type": "rebuild", 00:30:10.817 "target": "spare", 00:30:10.817 "progress": { 00:30:10.817 "blocks": 22528, 00:30:10.817 "percent": 34 00:30:10.817 } 00:30:10.817 }, 00:30:10.817 "base_bdevs_list": [ 00:30:10.817 { 00:30:10.817 "name": "spare", 00:30:10.817 "uuid": "2ea7f490-3a1b-5d18-bc6b-be4ed6ae9921", 00:30:10.818 "is_configured": true, 00:30:10.818 "data_offset": 0, 00:30:10.818 "data_size": 65536 00:30:10.818 }, 00:30:10.818 { 00:30:10.818 "name": null, 00:30:10.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.818 "is_configured": false, 00:30:10.818 
"data_offset": 0, 00:30:10.818 "data_size": 65536 00:30:10.818 }, 00:30:10.818 { 00:30:10.818 "name": "BaseBdev3", 00:30:10.818 "uuid": "227f0825-4329-5a6d-9e15-7cac9b6b6fc8", 00:30:10.818 "is_configured": true, 00:30:10.818 "data_offset": 0, 00:30:10.818 "data_size": 65536 00:30:10.818 }, 00:30:10.818 { 00:30:10.818 "name": "BaseBdev4", 00:30:10.818 "uuid": "5098e970-3609-5d2b-b78b-59ef47dd1921", 00:30:10.818 "is_configured": true, 00:30:10.818 "data_offset": 0, 00:30:10.818 "data_size": 65536 00:30:10.818 } 00:30:10.818 ] 00:30:10.818 }' 00:30:10.818 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:10.818 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:10.818 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:10.818 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:10.818 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=1031 00:30:10.818 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:10.818 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:10.818 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:10.818 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:10.818 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:10.818 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:10.818 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.818 16:47:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:11.075 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:11.075 "name": "raid_bdev1", 00:30:11.075 "uuid": "f89c489f-3346-413b-a13b-f98b9b123acd", 00:30:11.075 "strip_size_kb": 0, 00:30:11.075 "state": "online", 00:30:11.075 "raid_level": "raid1", 00:30:11.075 "superblock": false, 00:30:11.075 "num_base_bdevs": 4, 00:30:11.075 "num_base_bdevs_discovered": 3, 00:30:11.075 "num_base_bdevs_operational": 3, 00:30:11.075 "process": { 00:30:11.075 "type": "rebuild", 00:30:11.075 "target": "spare", 00:30:11.075 "progress": { 00:30:11.075 "blocks": 26624, 00:30:11.075 "percent": 40 00:30:11.075 } 00:30:11.075 }, 00:30:11.075 "base_bdevs_list": [ 00:30:11.075 { 00:30:11.075 "name": "spare", 00:30:11.075 "uuid": "2ea7f490-3a1b-5d18-bc6b-be4ed6ae9921", 00:30:11.075 "is_configured": true, 00:30:11.075 "data_offset": 0, 00:30:11.075 "data_size": 65536 00:30:11.075 }, 00:30:11.075 { 00:30:11.075 "name": null, 00:30:11.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:11.075 "is_configured": false, 00:30:11.075 "data_offset": 0, 00:30:11.075 "data_size": 65536 00:30:11.075 }, 00:30:11.075 { 00:30:11.075 "name": "BaseBdev3", 00:30:11.075 "uuid": "227f0825-4329-5a6d-9e15-7cac9b6b6fc8", 00:30:11.075 "is_configured": true, 00:30:11.075 "data_offset": 0, 00:30:11.075 "data_size": 65536 00:30:11.075 }, 00:30:11.075 { 00:30:11.075 "name": "BaseBdev4", 00:30:11.075 "uuid": "5098e970-3609-5d2b-b78b-59ef47dd1921", 00:30:11.075 "is_configured": true, 00:30:11.075 "data_offset": 0, 00:30:11.075 "data_size": 65536 00:30:11.075 } 00:30:11.075 ] 00:30:11.075 }' 00:30:11.075 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:11.075 [2024-07-24 16:47:07.885930] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 
00:30:11.075 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:11.075 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:11.333 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:11.333 16:47:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:11.590 [2024-07-24 16:47:08.252833] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:30:11.847 [2024-07-24 16:47:08.482690] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:30:12.105 [2024-07-24 16:47:08.832292] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:30:12.105 16:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:12.105 16:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:12.105 16:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:12.105 16:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:12.105 16:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:12.105 16:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:12.105 16:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.105 16:47:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.364 [2024-07-24 16:47:09.053711] bdev_raid.c: 
851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:30:12.364 16:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:12.364 "name": "raid_bdev1", 00:30:12.364 "uuid": "f89c489f-3346-413b-a13b-f98b9b123acd", 00:30:12.364 "strip_size_kb": 0, 00:30:12.364 "state": "online", 00:30:12.364 "raid_level": "raid1", 00:30:12.364 "superblock": false, 00:30:12.364 "num_base_bdevs": 4, 00:30:12.364 "num_base_bdevs_discovered": 3, 00:30:12.364 "num_base_bdevs_operational": 3, 00:30:12.364 "process": { 00:30:12.364 "type": "rebuild", 00:30:12.364 "target": "spare", 00:30:12.364 "progress": { 00:30:12.364 "blocks": 47104, 00:30:12.364 "percent": 71 00:30:12.364 } 00:30:12.364 }, 00:30:12.364 "base_bdevs_list": [ 00:30:12.364 { 00:30:12.364 "name": "spare", 00:30:12.364 "uuid": "2ea7f490-3a1b-5d18-bc6b-be4ed6ae9921", 00:30:12.364 "is_configured": true, 00:30:12.364 "data_offset": 0, 00:30:12.364 "data_size": 65536 00:30:12.364 }, 00:30:12.364 { 00:30:12.364 "name": null, 00:30:12.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.364 "is_configured": false, 00:30:12.364 "data_offset": 0, 00:30:12.364 "data_size": 65536 00:30:12.364 }, 00:30:12.364 { 00:30:12.364 "name": "BaseBdev3", 00:30:12.364 "uuid": "227f0825-4329-5a6d-9e15-7cac9b6b6fc8", 00:30:12.364 "is_configured": true, 00:30:12.364 "data_offset": 0, 00:30:12.364 "data_size": 65536 00:30:12.364 }, 00:30:12.364 { 00:30:12.364 "name": "BaseBdev4", 00:30:12.364 "uuid": "5098e970-3609-5d2b-b78b-59ef47dd1921", 00:30:12.364 "is_configured": true, 00:30:12.364 "data_offset": 0, 00:30:12.364 "data_size": 65536 00:30:12.364 } 00:30:12.364 ] 00:30:12.364 }' 00:30:12.364 16:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:12.622 16:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:12.622 16:47:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:12.622 16:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:12.622 16:47:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:12.622 [2024-07-24 16:47:09.392908] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:30:12.622 [2024-07-24 16:47:09.393176] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:30:12.879 [2024-07-24 16:47:09.603544] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:30:12.879 [2024-07-24 16:47:09.603692] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:30:13.445 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:13.445 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:13.445 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:13.445 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:13.445 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:13.445 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:13.445 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.445 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:13.703 [2024-07-24 16:47:10.368096] 
bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:13.703 [2024-07-24 16:47:10.468309] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:13.703 [2024-07-24 16:47:10.478972] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:13.703 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:13.703 "name": "raid_bdev1", 00:30:13.703 "uuid": "f89c489f-3346-413b-a13b-f98b9b123acd", 00:30:13.703 "strip_size_kb": 0, 00:30:13.703 "state": "online", 00:30:13.703 "raid_level": "raid1", 00:30:13.703 "superblock": false, 00:30:13.703 "num_base_bdevs": 4, 00:30:13.703 "num_base_bdevs_discovered": 3, 00:30:13.703 "num_base_bdevs_operational": 3, 00:30:13.703 "base_bdevs_list": [ 00:30:13.703 { 00:30:13.703 "name": "spare", 00:30:13.703 "uuid": "2ea7f490-3a1b-5d18-bc6b-be4ed6ae9921", 00:30:13.703 "is_configured": true, 00:30:13.703 "data_offset": 0, 00:30:13.703 "data_size": 65536 00:30:13.703 }, 00:30:13.703 { 00:30:13.703 "name": null, 00:30:13.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:13.703 "is_configured": false, 00:30:13.703 "data_offset": 0, 00:30:13.703 "data_size": 65536 00:30:13.703 }, 00:30:13.703 { 00:30:13.703 "name": "BaseBdev3", 00:30:13.703 "uuid": "227f0825-4329-5a6d-9e15-7cac9b6b6fc8", 00:30:13.703 "is_configured": true, 00:30:13.703 "data_offset": 0, 00:30:13.703 "data_size": 65536 00:30:13.703 }, 00:30:13.703 { 00:30:13.703 "name": "BaseBdev4", 00:30:13.703 "uuid": "5098e970-3609-5d2b-b78b-59ef47dd1921", 00:30:13.703 "is_configured": true, 00:30:13.703 "data_offset": 0, 00:30:13.703 "data_size": 65536 00:30:13.703 } 00:30:13.703 ] 00:30:13.703 }' 00:30:13.703 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:13.960 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:13.960 16:47:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:13.960 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:13.960 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:30:13.960 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:13.960 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:13.960 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:13.961 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:13.961 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:13.961 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.961 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:14.218 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:14.218 "name": "raid_bdev1", 00:30:14.218 "uuid": "f89c489f-3346-413b-a13b-f98b9b123acd", 00:30:14.218 "strip_size_kb": 0, 00:30:14.218 "state": "online", 00:30:14.218 "raid_level": "raid1", 00:30:14.218 "superblock": false, 00:30:14.218 "num_base_bdevs": 4, 00:30:14.218 "num_base_bdevs_discovered": 3, 00:30:14.218 "num_base_bdevs_operational": 3, 00:30:14.218 "base_bdevs_list": [ 00:30:14.218 { 00:30:14.218 "name": "spare", 00:30:14.218 "uuid": "2ea7f490-3a1b-5d18-bc6b-be4ed6ae9921", 00:30:14.218 "is_configured": true, 00:30:14.218 "data_offset": 0, 00:30:14.218 "data_size": 65536 00:30:14.218 }, 00:30:14.218 { 00:30:14.218 "name": null, 00:30:14.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:14.218 
"is_configured": false, 00:30:14.218 "data_offset": 0, 00:30:14.219 "data_size": 65536 00:30:14.219 }, 00:30:14.219 { 00:30:14.219 "name": "BaseBdev3", 00:30:14.219 "uuid": "227f0825-4329-5a6d-9e15-7cac9b6b6fc8", 00:30:14.219 "is_configured": true, 00:30:14.219 "data_offset": 0, 00:30:14.219 "data_size": 65536 00:30:14.219 }, 00:30:14.219 { 00:30:14.219 "name": "BaseBdev4", 00:30:14.219 "uuid": "5098e970-3609-5d2b-b78b-59ef47dd1921", 00:30:14.219 "is_configured": true, 00:30:14.219 "data_offset": 0, 00:30:14.219 "data_size": 65536 00:30:14.219 } 00:30:14.219 ] 00:30:14.219 }' 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:14.219 16:47:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:14.477 16:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:14.477 "name": "raid_bdev1", 00:30:14.477 "uuid": "f89c489f-3346-413b-a13b-f98b9b123acd", 00:30:14.477 "strip_size_kb": 0, 00:30:14.477 "state": "online", 00:30:14.477 "raid_level": "raid1", 00:30:14.477 "superblock": false, 00:30:14.477 "num_base_bdevs": 4, 00:30:14.477 "num_base_bdevs_discovered": 3, 00:30:14.477 "num_base_bdevs_operational": 3, 00:30:14.477 "base_bdevs_list": [ 00:30:14.477 { 00:30:14.477 "name": "spare", 00:30:14.477 "uuid": "2ea7f490-3a1b-5d18-bc6b-be4ed6ae9921", 00:30:14.477 "is_configured": true, 00:30:14.477 "data_offset": 0, 00:30:14.477 "data_size": 65536 00:30:14.477 }, 00:30:14.477 { 00:30:14.477 "name": null, 00:30:14.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:14.477 "is_configured": false, 00:30:14.477 "data_offset": 0, 00:30:14.477 "data_size": 65536 00:30:14.477 }, 00:30:14.477 { 00:30:14.477 "name": "BaseBdev3", 00:30:14.477 "uuid": "227f0825-4329-5a6d-9e15-7cac9b6b6fc8", 00:30:14.477 "is_configured": true, 00:30:14.477 "data_offset": 0, 00:30:14.477 "data_size": 65536 00:30:14.477 }, 00:30:14.477 { 00:30:14.477 "name": "BaseBdev4", 00:30:14.477 "uuid": "5098e970-3609-5d2b-b78b-59ef47dd1921", 00:30:14.477 "is_configured": true, 00:30:14.477 "data_offset": 0, 00:30:14.477 "data_size": 65536 00:30:14.477 } 00:30:14.477 ] 00:30:14.477 }' 00:30:14.477 16:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:14.477 16:47:11 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:15.042 16:47:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:15.300 [2024-07-24 16:47:11.971601] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:15.300 [2024-07-24 16:47:11.971640] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:15.300 00:30:15.300 Latency(us) 00:30:15.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.300 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:15.300 raid_bdev1 : 10.97 93.04 279.12 0.00 0.00 14317.54 342.43 121634.82 00:30:15.300 =================================================================================================================== 00:30:15.300 Total : 93.04 279.12 0.00 0.00 14317.54 342.43 121634.82 00:30:15.300 [2024-07-24 16:47:12.075379] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:15.300 [2024-07-24 16:47:12.075426] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:15.300 [2024-07-24 16:47:12.075538] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:15.300 [2024-07-24 16:47:12.075557] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:30:15.300 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:15.300 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:30:15.558 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:30:15.558 16:47:12
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:30:15.558 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:30:15.558 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:30:15.558 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:15.558 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:15.558 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:15.558 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:15.558 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:15.558 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:15.558 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:15.558 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:15.558 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:30:15.816 /dev/nbd0 00:30:15.816 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:15.816 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:15.816 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:15.816 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:30:15.816 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:15.816 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:15.816 
16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:15.816 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:30:15.816 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:15.816 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:15.816 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:15.816 1+0 records in 00:30:15.816 1+0 records out 00:30:15.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293719 s, 13.9 MB/s 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # continue 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in 
"${base_bdevs[@]:1}" 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:15.817 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:16.075 /dev/nbd1 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:16.075 1+0 records in 00:30:16.075 1+0 records out 00:30:16.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265159 s, 15.4 MB/s 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:16.075 16:47:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:16.333 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:16.333 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:16.333 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd1') 00:30:16.333 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:16.333 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:16.333 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:16.333 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:16.600 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:16.865 /dev/nbd1 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:16.865 1+0 
records in 00:30:16.865 1+0 records out 00:30:16.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221486 s, 18.5 MB/s 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:16.865 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd1 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:17.123 16:47:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:17.380 16:47:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:17.380 16:47:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:17.380 16:47:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:17.380 16:47:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:17.380 16:47:14 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:17.380 16:47:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:17.380 16:47:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:17.380 16:47:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:17.380 16:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:30:17.380 16:47:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 1772034 00:30:17.380 16:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 1772034 ']' 00:30:17.380 16:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 1772034 00:30:17.380 16:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:30:17.638 16:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:17.638 16:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1772034 00:30:17.638 16:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:17.638 16:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:17.638 16:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1772034' 00:30:17.638 killing process with pid 1772034 00:30:17.638 16:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 1772034 00:30:17.638 Received shutdown signal, test time was about 13.221991 seconds 00:30:17.638 00:30:17.638 Latency(us) 00:30:17.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.638 =================================================================================================================== 00:30:17.638 Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:30:17.638 [2024-07-24 16:47:14.297332] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:17.638 16:47:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 1772034 00:30:18.204 [2024-07-24 16:47:14.800108] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:20.103 16:47:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:30:20.103 00:30:20.103 real 0m21.102s 00:30:20.103 user 0m30.815s 00:30:20.103 sys 0m3.425s 00:30:20.103 16:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:20.103 16:47:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:20.103 ************************************ 00:30:20.103 END TEST raid_rebuild_test_io 00:30:20.103 ************************************ 00:30:20.103 16:47:16 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:30:20.103 16:47:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:30:20.103 16:47:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:20.103 16:47:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:20.103 ************************************ 00:30:20.103 START TEST raid_rebuild_test_sb_io 00:30:20.103 ************************************ 00:30:20.103 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:30:20.103 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:30:20.103 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:30:20.104 
16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local 
strip_size 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=1775779 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 1775779 /var/tmp/spdk-raid.sock 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 1775779 ']' 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:20.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:20.104 16:47:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:20.104 [2024-07-24 16:47:16.851734] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:30:20.104 [2024-07-24 16:47:16.851858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775779 ] 00:30:20.104 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:20.104 Zero copy mechanism will not be used. 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 
0000:3d:02.0 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:01.6 cannot be 
used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:20.363 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:20.363 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:20.363 [2024-07-24 16:47:17.077183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.621 [2024-07-24 16:47:17.340380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.878 [2024-07-24 16:47:17.674597] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:20.878 [2024-07-24 16:47:17.674636] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:21.136 16:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:21.136 16:47:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:30:21.136 16:47:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:21.136 16:47:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:21.394 BaseBdev1_malloc 00:30:21.394 16:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:21.651 [2024-07-24 16:47:18.341609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:21.651 [2024-07-24 16:47:18.341677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:21.651 [2024-07-24 16:47:18.341710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:30:21.651 [2024-07-24 16:47:18.341729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:21.651 [2024-07-24 16:47:18.344456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:21.651 [2024-07-24 16:47:18.344494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:21.651 BaseBdev1 00:30:21.651 16:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:21.651 16:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:21.907 BaseBdev2_malloc 00:30:21.907 16:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:22.163 [2024-07-24 16:47:18.845761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev2_malloc 00:30:22.163 [2024-07-24 16:47:18.845822] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:22.163 [2024-07-24 16:47:18.845850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:30:22.163 [2024-07-24 16:47:18.845873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:22.163 [2024-07-24 16:47:18.848591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:22.163 [2024-07-24 16:47:18.848627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:22.163 BaseBdev2 00:30:22.163 16:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:22.163 16:47:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:22.420 BaseBdev3_malloc 00:30:22.420 16:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:22.676 [2024-07-24 16:47:19.336266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:22.676 [2024-07-24 16:47:19.336334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:22.676 [2024-07-24 16:47:19.336365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:30:22.676 [2024-07-24 16:47:19.336384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:22.676 [2024-07-24 16:47:19.339050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:22.676 [2024-07-24 16:47:19.339085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:30:22.676 BaseBdev3 00:30:22.676 16:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:22.676 16:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:22.953 BaseBdev4_malloc 00:30:22.953 16:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:23.227 [2024-07-24 16:47:19.820356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:23.227 [2024-07-24 16:47:19.820429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:23.227 [2024-07-24 16:47:19.820460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:30:23.227 [2024-07-24 16:47:19.820478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:23.227 [2024-07-24 16:47:19.823231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:23.227 [2024-07-24 16:47:19.823269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:23.227 BaseBdev4 00:30:23.227 16:47:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:23.484 spare_malloc 00:30:23.484 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:23.484 spare_delay 00:30:23.484 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:23.741 [2024-07-24 16:47:20.529351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:23.741 [2024-07-24 16:47:20.529407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:23.741 [2024-07-24 16:47:20.529436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:30:23.741 [2024-07-24 16:47:20.529454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:23.741 [2024-07-24 16:47:20.532188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:23.741 [2024-07-24 16:47:20.532225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:23.741 spare 00:30:23.741 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:23.998 [2024-07-24 16:47:20.745971] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:23.998 [2024-07-24 16:47:20.748273] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:23.998 [2024-07-24 16:47:20.748343] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:23.998 [2024-07-24 16:47:20.748411] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:23.998 [2024-07-24 16:47:20.748645] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:30:23.998 [2024-07-24 16:47:20.748667] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:23.998 [2024-07-24 16:47:20.748999] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000107e0 00:30:23.998 [2024-07-24 16:47:20.749252] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:30:23.998 [2024-07-24 16:47:20.749269] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:30:23.998 [2024-07-24 16:47:20.749456] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:23.998 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:23.998 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:23.998 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:23.998 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:23.998 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:23.998 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:23.998 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:23.998 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:23.998 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:23.998 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:23.998 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.998 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.255 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:30:24.255 "name": "raid_bdev1", 00:30:24.255 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:24.255 "strip_size_kb": 0, 00:30:24.255 "state": "online", 00:30:24.255 "raid_level": "raid1", 00:30:24.255 "superblock": true, 00:30:24.255 "num_base_bdevs": 4, 00:30:24.255 "num_base_bdevs_discovered": 4, 00:30:24.255 "num_base_bdevs_operational": 4, 00:30:24.255 "base_bdevs_list": [ 00:30:24.255 { 00:30:24.256 "name": "BaseBdev1", 00:30:24.256 "uuid": "d63559ff-aab8-5b4a-9fc4-70399127b2bc", 00:30:24.256 "is_configured": true, 00:30:24.256 "data_offset": 2048, 00:30:24.256 "data_size": 63488 00:30:24.256 }, 00:30:24.256 { 00:30:24.256 "name": "BaseBdev2", 00:30:24.256 "uuid": "e4f31354-ffef-5f35-94fb-b6acb7112cae", 00:30:24.256 "is_configured": true, 00:30:24.256 "data_offset": 2048, 00:30:24.256 "data_size": 63488 00:30:24.256 }, 00:30:24.256 { 00:30:24.256 "name": "BaseBdev3", 00:30:24.256 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:24.256 "is_configured": true, 00:30:24.256 "data_offset": 2048, 00:30:24.256 "data_size": 63488 00:30:24.256 }, 00:30:24.256 { 00:30:24.256 "name": "BaseBdev4", 00:30:24.256 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:24.256 "is_configured": true, 00:30:24.256 "data_offset": 2048, 00:30:24.256 "data_size": 63488 00:30:24.256 } 00:30:24.256 ] 00:30:24.256 }' 00:30:24.256 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:24.256 16:47:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:24.819 16:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:24.819 16:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:30:25.077 [2024-07-24 16:47:21.769119] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:25.077 
16:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:30:25.077 16:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.077 16:47:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:25.335 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:30:25.335 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:30:25.335 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:25.335 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:25.335 [2024-07-24 16:47:22.148397] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:30:25.335 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:25.335 Zero copy mechanism will not be used. 00:30:25.335 Running I/O for 60 seconds... 
00:30:25.592 [2024-07-24 16:47:22.244898] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:25.592 [2024-07-24 16:47:22.245214] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50 00:30:25.592 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:25.592 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:25.592 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:25.593 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:25.593 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:25.593 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:25.593 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:25.593 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:25.593 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:25.593 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:25.593 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.593 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:25.850 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:25.850 "name": "raid_bdev1", 00:30:25.850 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:25.850 "strip_size_kb": 0, 00:30:25.850 "state": "online", 00:30:25.850 "raid_level": 
"raid1", 00:30:25.850 "superblock": true, 00:30:25.850 "num_base_bdevs": 4, 00:30:25.850 "num_base_bdevs_discovered": 3, 00:30:25.850 "num_base_bdevs_operational": 3, 00:30:25.850 "base_bdevs_list": [ 00:30:25.850 { 00:30:25.850 "name": null, 00:30:25.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.850 "is_configured": false, 00:30:25.850 "data_offset": 2048, 00:30:25.850 "data_size": 63488 00:30:25.850 }, 00:30:25.850 { 00:30:25.850 "name": "BaseBdev2", 00:30:25.850 "uuid": "e4f31354-ffef-5f35-94fb-b6acb7112cae", 00:30:25.850 "is_configured": true, 00:30:25.850 "data_offset": 2048, 00:30:25.850 "data_size": 63488 00:30:25.850 }, 00:30:25.850 { 00:30:25.850 "name": "BaseBdev3", 00:30:25.850 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:25.850 "is_configured": true, 00:30:25.850 "data_offset": 2048, 00:30:25.850 "data_size": 63488 00:30:25.850 }, 00:30:25.850 { 00:30:25.850 "name": "BaseBdev4", 00:30:25.850 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:25.850 "is_configured": true, 00:30:25.850 "data_offset": 2048, 00:30:25.850 "data_size": 63488 00:30:25.850 } 00:30:25.850 ] 00:30:25.850 }' 00:30:25.850 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:25.850 16:47:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:26.415 16:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:26.678 [2024-07-24 16:47:23.346233] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:26.678 16:47:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:26.678 [2024-07-24 16:47:23.428959] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:30:26.678 [2024-07-24 16:47:23.431425] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: 
Started rebuild on raid bdev raid_bdev1 00:30:26.937 [2024-07-24 16:47:23.543121] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:26.937 [2024-07-24 16:47:23.544415] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:26.937 [2024-07-24 16:47:23.774901] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:26.937 [2024-07-24 16:47:23.775661] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:27.502 [2024-07-24 16:47:24.157790] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:27.759 [2024-07-24 16:47:24.404667] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:27.759 [2024-07-24 16:47:24.405401] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:27.759 16:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:27.759 16:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:27.759 16:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:27.759 16:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:27.759 16:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:27.759 16:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.759 16:47:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:28.017 16:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:28.017 "name": "raid_bdev1", 00:30:28.017 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:28.017 "strip_size_kb": 0, 00:30:28.017 "state": "online", 00:30:28.017 "raid_level": "raid1", 00:30:28.017 "superblock": true, 00:30:28.017 "num_base_bdevs": 4, 00:30:28.017 "num_base_bdevs_discovered": 4, 00:30:28.017 "num_base_bdevs_operational": 4, 00:30:28.017 "process": { 00:30:28.017 "type": "rebuild", 00:30:28.017 "target": "spare", 00:30:28.017 "progress": { 00:30:28.017 "blocks": 10240, 00:30:28.017 "percent": 16 00:30:28.017 } 00:30:28.017 }, 00:30:28.017 "base_bdevs_list": [ 00:30:28.017 { 00:30:28.017 "name": "spare", 00:30:28.017 "uuid": "b0d82da0-2684-547b-88ee-7e0df12cc093", 00:30:28.017 "is_configured": true, 00:30:28.017 "data_offset": 2048, 00:30:28.017 "data_size": 63488 00:30:28.017 }, 00:30:28.017 { 00:30:28.017 "name": "BaseBdev2", 00:30:28.017 "uuid": "e4f31354-ffef-5f35-94fb-b6acb7112cae", 00:30:28.017 "is_configured": true, 00:30:28.017 "data_offset": 2048, 00:30:28.017 "data_size": 63488 00:30:28.017 }, 00:30:28.017 { 00:30:28.017 "name": "BaseBdev3", 00:30:28.017 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:28.017 "is_configured": true, 00:30:28.017 "data_offset": 2048, 00:30:28.017 "data_size": 63488 00:30:28.017 }, 00:30:28.017 { 00:30:28.017 "name": "BaseBdev4", 00:30:28.017 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:28.017 "is_configured": true, 00:30:28.017 "data_offset": 2048, 00:30:28.017 "data_size": 63488 00:30:28.017 } 00:30:28.017 ] 00:30:28.017 }' 00:30:28.017 16:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:28.017 16:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:28.017 16:47:24 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:28.017 16:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:28.017 16:47:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:28.017 [2024-07-24 16:47:24.772166] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:28.275 [2024-07-24 16:47:24.944752] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:28.275 [2024-07-24 16:47:25.014719] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:28.276 [2024-07-24 16:47:25.024556] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:28.276 [2024-07-24 16:47:25.036331] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:28.276 [2024-07-24 16:47:25.036373] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:28.276 [2024-07-24 16:47:25.036393] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:28.276 [2024-07-24 16:47:25.075128] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50 00:30:28.276 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:28.276 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:28.276 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:28.276 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:28.276 16:47:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:28.276 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:28.276 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:28.276 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:28.276 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:28.276 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:28.276 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.276 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:28.534 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:28.534 "name": "raid_bdev1", 00:30:28.534 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:28.534 "strip_size_kb": 0, 00:30:28.534 "state": "online", 00:30:28.534 "raid_level": "raid1", 00:30:28.534 "superblock": true, 00:30:28.534 "num_base_bdevs": 4, 00:30:28.534 "num_base_bdevs_discovered": 3, 00:30:28.534 "num_base_bdevs_operational": 3, 00:30:28.534 "base_bdevs_list": [ 00:30:28.534 { 00:30:28.534 "name": null, 00:30:28.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.534 "is_configured": false, 00:30:28.534 "data_offset": 2048, 00:30:28.534 "data_size": 63488 00:30:28.534 }, 00:30:28.534 { 00:30:28.534 "name": "BaseBdev2", 00:30:28.534 "uuid": "e4f31354-ffef-5f35-94fb-b6acb7112cae", 00:30:28.534 "is_configured": true, 00:30:28.534 "data_offset": 2048, 00:30:28.534 "data_size": 63488 00:30:28.534 }, 00:30:28.534 { 00:30:28.534 "name": "BaseBdev3", 00:30:28.534 "uuid": 
"073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:28.534 "is_configured": true, 00:30:28.534 "data_offset": 2048, 00:30:28.534 "data_size": 63488 00:30:28.534 }, 00:30:28.534 { 00:30:28.534 "name": "BaseBdev4", 00:30:28.534 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:28.534 "is_configured": true, 00:30:28.534 "data_offset": 2048, 00:30:28.534 "data_size": 63488 00:30:28.534 } 00:30:28.534 ] 00:30:28.534 }' 00:30:28.534 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:28.534 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:29.468 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:29.468 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:29.468 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:29.468 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:29.468 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:29.468 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.468 16:47:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.468 16:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:29.468 "name": "raid_bdev1", 00:30:29.468 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:29.468 "strip_size_kb": 0, 00:30:29.468 "state": "online", 00:30:29.468 "raid_level": "raid1", 00:30:29.468 "superblock": true, 00:30:29.468 "num_base_bdevs": 4, 00:30:29.468 "num_base_bdevs_discovered": 3, 00:30:29.468 "num_base_bdevs_operational": 3, 00:30:29.468 
"base_bdevs_list": [ 00:30:29.468 { 00:30:29.468 "name": null, 00:30:29.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.468 "is_configured": false, 00:30:29.468 "data_offset": 2048, 00:30:29.468 "data_size": 63488 00:30:29.468 }, 00:30:29.468 { 00:30:29.468 "name": "BaseBdev2", 00:30:29.468 "uuid": "e4f31354-ffef-5f35-94fb-b6acb7112cae", 00:30:29.468 "is_configured": true, 00:30:29.468 "data_offset": 2048, 00:30:29.468 "data_size": 63488 00:30:29.468 }, 00:30:29.468 { 00:30:29.468 "name": "BaseBdev3", 00:30:29.468 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:29.468 "is_configured": true, 00:30:29.468 "data_offset": 2048, 00:30:29.468 "data_size": 63488 00:30:29.468 }, 00:30:29.468 { 00:30:29.468 "name": "BaseBdev4", 00:30:29.468 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:29.468 "is_configured": true, 00:30:29.468 "data_offset": 2048, 00:30:29.468 "data_size": 63488 00:30:29.468 } 00:30:29.468 ] 00:30:29.468 }' 00:30:29.468 16:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:29.468 16:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:29.468 16:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:29.468 16:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:29.468 16:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:29.726 [2024-07-24 16:47:26.525604] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:29.984 16:47:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:30:29.984 [2024-07-24 16:47:26.631189] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010bf0 00:30:29.984 [2024-07-24 
16:47:26.633640] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:29.984 [2024-07-24 16:47:26.745097] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:29.984 [2024-07-24 16:47:26.746460] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:30.242 [2024-07-24 16:47:26.976864] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:30.242 [2024-07-24 16:47:26.977155] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:30.500 [2024-07-24 16:47:27.311151] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:30.500 [2024-07-24 16:47:27.312338] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:30.758 [2024-07-24 16:47:27.553507] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:30.758 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:30.758 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:30.758 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:30.758 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:30.758 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:30.758 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:30:30.758 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:31.017 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:31.017 "name": "raid_bdev1", 00:30:31.017 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:31.017 "strip_size_kb": 0, 00:30:31.017 "state": "online", 00:30:31.017 "raid_level": "raid1", 00:30:31.017 "superblock": true, 00:30:31.017 "num_base_bdevs": 4, 00:30:31.017 "num_base_bdevs_discovered": 4, 00:30:31.017 "num_base_bdevs_operational": 4, 00:30:31.017 "process": { 00:30:31.017 "type": "rebuild", 00:30:31.017 "target": "spare", 00:30:31.017 "progress": { 00:30:31.017 "blocks": 12288, 00:30:31.017 "percent": 19 00:30:31.017 } 00:30:31.017 }, 00:30:31.017 "base_bdevs_list": [ 00:30:31.017 { 00:30:31.017 "name": "spare", 00:30:31.017 "uuid": "b0d82da0-2684-547b-88ee-7e0df12cc093", 00:30:31.017 "is_configured": true, 00:30:31.017 "data_offset": 2048, 00:30:31.017 "data_size": 63488 00:30:31.017 }, 00:30:31.017 { 00:30:31.017 "name": "BaseBdev2", 00:30:31.017 "uuid": "e4f31354-ffef-5f35-94fb-b6acb7112cae", 00:30:31.017 "is_configured": true, 00:30:31.017 "data_offset": 2048, 00:30:31.017 "data_size": 63488 00:30:31.017 }, 00:30:31.017 { 00:30:31.017 "name": "BaseBdev3", 00:30:31.017 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:31.017 "is_configured": true, 00:30:31.017 "data_offset": 2048, 00:30:31.017 "data_size": 63488 00:30:31.017 }, 00:30:31.017 { 00:30:31.017 "name": "BaseBdev4", 00:30:31.017 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:31.017 "is_configured": true, 00:30:31.017 "data_offset": 2048, 00:30:31.017 "data_size": 63488 00:30:31.017 } 00:30:31.017 ] 00:30:31.017 }' 00:30:31.017 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:31.276 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:30:31.276 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:31.276 [2024-07-24 16:47:27.918031] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:31.276 [2024-07-24 16:47:27.918460] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:31.276 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:31.276 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:30:31.276 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:30:31.276 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:30:31.276 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:30:31.276 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:30:31.276 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:30:31.276 16:47:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:31.276 [2024-07-24 16:47:28.059849] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:31.534 [2024-07-24 16:47:28.144236] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:31.793 [2024-07-24 16:47:28.527788] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010a50 00:30:31.793 [2024-07-24 16:47:28.527834] bdev_raid.c:1945:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 
0x60d000010bf0 00:30:31.793 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:30:31.793 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:30:31.793 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:31.793 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:31.793 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:31.793 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:31.793 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:31.793 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:31.793 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.051 [2024-07-24 16:47:28.686383] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:32.051 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:32.051 "name": "raid_bdev1", 00:30:32.051 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:32.051 "strip_size_kb": 0, 00:30:32.051 "state": "online", 00:30:32.051 "raid_level": "raid1", 00:30:32.051 "superblock": true, 00:30:32.051 "num_base_bdevs": 4, 00:30:32.051 "num_base_bdevs_discovered": 3, 00:30:32.051 "num_base_bdevs_operational": 3, 00:30:32.051 "process": { 00:30:32.051 "type": "rebuild", 00:30:32.051 "target": "spare", 00:30:32.051 "progress": { 00:30:32.051 "blocks": 20480, 00:30:32.051 "percent": 32 00:30:32.051 } 00:30:32.051 }, 00:30:32.051 
"base_bdevs_list": [ 00:30:32.051 { 00:30:32.051 "name": "spare", 00:30:32.051 "uuid": "b0d82da0-2684-547b-88ee-7e0df12cc093", 00:30:32.051 "is_configured": true, 00:30:32.051 "data_offset": 2048, 00:30:32.051 "data_size": 63488 00:30:32.051 }, 00:30:32.051 { 00:30:32.051 "name": null, 00:30:32.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.051 "is_configured": false, 00:30:32.051 "data_offset": 2048, 00:30:32.051 "data_size": 63488 00:30:32.051 }, 00:30:32.051 { 00:30:32.051 "name": "BaseBdev3", 00:30:32.051 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:32.051 "is_configured": true, 00:30:32.051 "data_offset": 2048, 00:30:32.051 "data_size": 63488 00:30:32.051 }, 00:30:32.051 { 00:30:32.051 "name": "BaseBdev4", 00:30:32.051 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:32.051 "is_configured": true, 00:30:32.051 "data_offset": 2048, 00:30:32.051 "data_size": 63488 00:30:32.051 } 00:30:32.051 ] 00:30:32.051 }' 00:30:32.051 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:32.051 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:32.051 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:32.051 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:32.051 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=1052 00:30:32.051 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:32.051 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:32.051 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:32.051 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local 
process_type=rebuild 00:30:32.051 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:32.051 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:32.051 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.051 16:47:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.309 [2024-07-24 16:47:29.064005] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:30:32.309 16:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:32.309 "name": "raid_bdev1", 00:30:32.309 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:32.309 "strip_size_kb": 0, 00:30:32.309 "state": "online", 00:30:32.309 "raid_level": "raid1", 00:30:32.309 "superblock": true, 00:30:32.309 "num_base_bdevs": 4, 00:30:32.309 "num_base_bdevs_discovered": 3, 00:30:32.309 "num_base_bdevs_operational": 3, 00:30:32.309 "process": { 00:30:32.309 "type": "rebuild", 00:30:32.309 "target": "spare", 00:30:32.309 "progress": { 00:30:32.309 "blocks": 26624, 00:30:32.309 "percent": 41 00:30:32.309 } 00:30:32.309 }, 00:30:32.309 "base_bdevs_list": [ 00:30:32.309 { 00:30:32.309 "name": "spare", 00:30:32.309 "uuid": "b0d82da0-2684-547b-88ee-7e0df12cc093", 00:30:32.309 "is_configured": true, 00:30:32.309 "data_offset": 2048, 00:30:32.309 "data_size": 63488 00:30:32.309 }, 00:30:32.309 { 00:30:32.309 "name": null, 00:30:32.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.309 "is_configured": false, 00:30:32.309 "data_offset": 2048, 00:30:32.309 "data_size": 63488 00:30:32.309 }, 00:30:32.309 { 00:30:32.309 "name": "BaseBdev3", 00:30:32.309 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:32.309 
"is_configured": true, 00:30:32.309 "data_offset": 2048, 00:30:32.309 "data_size": 63488 00:30:32.309 }, 00:30:32.309 { 00:30:32.309 "name": "BaseBdev4", 00:30:32.309 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:32.309 "is_configured": true, 00:30:32.309 "data_offset": 2048, 00:30:32.309 "data_size": 63488 00:30:32.309 } 00:30:32.309 ] 00:30:32.309 }' 00:30:32.309 16:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:32.309 16:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:32.309 16:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:32.568 16:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:32.568 16:47:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:32.568 [2024-07-24 16:47:29.414002] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:30:32.826 [2024-07-24 16:47:29.645572] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:30:33.392 16:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:33.392 16:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:33.392 16:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:33.392 16:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:33.392 16:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:33.392 16:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:33.392 16:47:30 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:33.392 16:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:33.650 16:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:33.650 "name": "raid_bdev1", 00:30:33.650 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:33.650 "strip_size_kb": 0, 00:30:33.650 "state": "online", 00:30:33.650 "raid_level": "raid1", 00:30:33.650 "superblock": true, 00:30:33.650 "num_base_bdevs": 4, 00:30:33.650 "num_base_bdevs_discovered": 3, 00:30:33.650 "num_base_bdevs_operational": 3, 00:30:33.650 "process": { 00:30:33.650 "type": "rebuild", 00:30:33.650 "target": "spare", 00:30:33.650 "progress": { 00:30:33.650 "blocks": 47104, 00:30:33.650 "percent": 74 00:30:33.650 } 00:30:33.650 }, 00:30:33.650 "base_bdevs_list": [ 00:30:33.650 { 00:30:33.650 "name": "spare", 00:30:33.650 "uuid": "b0d82da0-2684-547b-88ee-7e0df12cc093", 00:30:33.650 "is_configured": true, 00:30:33.650 "data_offset": 2048, 00:30:33.650 "data_size": 63488 00:30:33.650 }, 00:30:33.650 { 00:30:33.650 "name": null, 00:30:33.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.650 "is_configured": false, 00:30:33.650 "data_offset": 2048, 00:30:33.650 "data_size": 63488 00:30:33.650 }, 00:30:33.650 { 00:30:33.650 "name": "BaseBdev3", 00:30:33.650 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:33.650 "is_configured": true, 00:30:33.650 "data_offset": 2048, 00:30:33.650 "data_size": 63488 00:30:33.650 }, 00:30:33.650 { 00:30:33.650 "name": "BaseBdev4", 00:30:33.650 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:33.650 "is_configured": true, 00:30:33.650 "data_offset": 2048, 00:30:33.650 "data_size": 63488 00:30:33.650 } 00:30:33.650 ] 00:30:33.650 }' 00:30:33.650 16:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r 
'.process.type // "none"' 00:30:33.650 16:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:33.650 16:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:33.908 16:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:33.909 16:47:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:33.909 [2024-07-24 16:47:30.694898] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:30:34.167 [2024-07-24 16:47:30.933473] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:30:34.423 [2024-07-24 16:47:31.153382] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:30:34.423 [2024-07-24 16:47:31.153707] bdev_raid.c: 851:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:30:34.681 [2024-07-24 16:47:31.494557] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:34.681 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:34.681 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:34.681 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:34.681 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:34.681 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:34.681 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:34.681 16:47:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:34.681 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:34.940 [2024-07-24 16:47:31.602510] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:34.940 [2024-07-24 16:47:31.606401] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:34.940 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:34.940 "name": "raid_bdev1", 00:30:34.940 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:34.940 "strip_size_kb": 0, 00:30:34.940 "state": "online", 00:30:34.940 "raid_level": "raid1", 00:30:34.940 "superblock": true, 00:30:34.940 "num_base_bdevs": 4, 00:30:34.940 "num_base_bdevs_discovered": 3, 00:30:34.940 "num_base_bdevs_operational": 3, 00:30:34.940 "base_bdevs_list": [ 00:30:34.940 { 00:30:34.940 "name": "spare", 00:30:34.940 "uuid": "b0d82da0-2684-547b-88ee-7e0df12cc093", 00:30:34.940 "is_configured": true, 00:30:34.940 "data_offset": 2048, 00:30:34.940 "data_size": 63488 00:30:34.940 }, 00:30:34.940 { 00:30:34.940 "name": null, 00:30:34.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.940 "is_configured": false, 00:30:34.940 "data_offset": 2048, 00:30:34.940 "data_size": 63488 00:30:34.940 }, 00:30:34.940 { 00:30:34.940 "name": "BaseBdev3", 00:30:34.940 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:34.940 "is_configured": true, 00:30:34.940 "data_offset": 2048, 00:30:34.940 "data_size": 63488 00:30:34.940 }, 00:30:34.940 { 00:30:34.940 "name": "BaseBdev4", 00:30:34.940 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:34.940 "is_configured": true, 00:30:34.940 "data_offset": 2048, 00:30:34.940 "data_size": 63488 00:30:34.940 } 00:30:34.940 ] 00:30:34.940 }' 00:30:34.940 
16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:34.940 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:34.940 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:35.199 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:35.199 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:30:35.199 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:35.199 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:35.199 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:35.199 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:35.199 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:35.199 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.199 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:35.199 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:35.199 "name": "raid_bdev1", 00:30:35.199 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:35.199 "strip_size_kb": 0, 00:30:35.199 "state": "online", 00:30:35.199 "raid_level": "raid1", 00:30:35.199 "superblock": true, 00:30:35.199 "num_base_bdevs": 4, 00:30:35.199 "num_base_bdevs_discovered": 3, 00:30:35.199 "num_base_bdevs_operational": 3, 00:30:35.199 "base_bdevs_list": [ 00:30:35.199 { 00:30:35.199 "name": "spare", 00:30:35.199 
"uuid": "b0d82da0-2684-547b-88ee-7e0df12cc093", 00:30:35.199 "is_configured": true, 00:30:35.199 "data_offset": 2048, 00:30:35.199 "data_size": 63488 00:30:35.199 }, 00:30:35.199 { 00:30:35.199 "name": null, 00:30:35.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.199 "is_configured": false, 00:30:35.199 "data_offset": 2048, 00:30:35.199 "data_size": 63488 00:30:35.199 }, 00:30:35.199 { 00:30:35.199 "name": "BaseBdev3", 00:30:35.199 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:35.199 "is_configured": true, 00:30:35.199 "data_offset": 2048, 00:30:35.199 "data_size": 63488 00:30:35.199 }, 00:30:35.199 { 00:30:35.199 "name": "BaseBdev4", 00:30:35.199 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:35.199 "is_configured": true, 00:30:35.199 "data_offset": 2048, 00:30:35.199 "data_size": 63488 00:30:35.199 } 00:30:35.199 ] 00:30:35.199 }' 00:30:35.199 16:47:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:35.199 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:35.199 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=3 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:35.480 "name": "raid_bdev1", 00:30:35.480 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:35.480 "strip_size_kb": 0, 00:30:35.480 "state": "online", 00:30:35.480 "raid_level": "raid1", 00:30:35.480 "superblock": true, 00:30:35.480 "num_base_bdevs": 4, 00:30:35.480 "num_base_bdevs_discovered": 3, 00:30:35.480 "num_base_bdevs_operational": 3, 00:30:35.480 "base_bdevs_list": [ 00:30:35.480 { 00:30:35.480 "name": "spare", 00:30:35.480 "uuid": "b0d82da0-2684-547b-88ee-7e0df12cc093", 00:30:35.480 "is_configured": true, 00:30:35.480 "data_offset": 2048, 00:30:35.480 "data_size": 63488 00:30:35.480 }, 00:30:35.480 { 00:30:35.480 "name": null, 00:30:35.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.480 "is_configured": false, 00:30:35.480 "data_offset": 2048, 00:30:35.480 "data_size": 63488 00:30:35.480 }, 00:30:35.480 { 00:30:35.480 "name": "BaseBdev3", 00:30:35.480 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:35.480 "is_configured": true, 00:30:35.480 "data_offset": 2048, 00:30:35.480 "data_size": 63488 00:30:35.480 }, 00:30:35.480 { 00:30:35.480 "name": 
"BaseBdev4", 00:30:35.480 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:35.480 "is_configured": true, 00:30:35.480 "data_offset": 2048, 00:30:35.480 "data_size": 63488 00:30:35.480 } 00:30:35.480 ] 00:30:35.480 }' 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:35.480 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:36.060 16:47:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:36.319 [2024-07-24 16:47:33.044663] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:36.319 [2024-07-24 16:47:33.044709] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:36.319 00:30:36.319 Latency(us) 00:30:36.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.319 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:36.319 raid_bdev1 : 10.97 93.64 280.93 0.00 0.00 14411.49 337.51 122473.68 00:30:36.319 =================================================================================================================== 00:30:36.319 Total : 93.64 280.93 0.00 0.00 14411.49 337.51 122473.68 00:30:36.319 [2024-07-24 16:47:33.176810] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:36.319 [2024-07-24 16:47:33.176861] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:36.319 [2024-07-24 16:47:33.176980] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:36.319 [2024-07-24 16:47:33.177000] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:30:36.319 0 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:36.578 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:30:36.837 /dev/nbd0 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:36.837 16:47:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:36.837 1+0 records in 00:30:36.837 1+0 records out 00:30:36.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298418 s, 13.7 MB/s 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:36.837 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # continue 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:37.096 /dev/nbd1 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:37.096 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:37.355 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:30:37.355 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:37.355 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:37.355 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:37.355 1+0 records in 00:30:37.355 1+0 records out 00:30:37.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261231 s, 15.7 MB/s 00:30:37.355 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:37.355 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:30:37.355 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:37.355 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:37.355 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:30:37.355 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:37.355 
16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:37.355 16:47:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:37.355 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:37.355 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:37.355 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:37.355 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:37.355 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:37.355 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:37.355 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for 
bdev in "${base_bdevs[@]:1}" 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:37.650 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:37.909 /dev/nbd1 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 
00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:37.909 1+0 records in 00:30:37.909 1+0 records out 00:30:37.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293093 s, 14.0 MB/s 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:37.909 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:38.168 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:38.168 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:30:38.168 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:38.168 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:38.168 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:38.168 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:38.168 16:47:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:38.427 16:47:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:38.427 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:38.428 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:38.428 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:38.428 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:38.428 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:38.428 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:30:38.428 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:38.686 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:38.945 [2024-07-24 16:47:35.711830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:38.945 [2024-07-24 16:47:35.711897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:38.945 [2024-07-24 
16:47:35.711928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044d80 00:30:38.945 [2024-07-24 16:47:35.711947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:38.945 [2024-07-24 16:47:35.714734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:38.945 [2024-07-24 16:47:35.714772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:38.945 [2024-07-24 16:47:35.714875] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:38.945 [2024-07-24 16:47:35.714948] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:38.945 [2024-07-24 16:47:35.715168] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:38.945 [2024-07-24 16:47:35.715291] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:38.945 spare 00:30:38.945 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:38.945 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:38.945 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:38.945 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:38.945 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:38.945 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:38.945 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:38.945 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:38.945 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:30:38.945 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:38.945 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:38.945 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:39.204 [2024-07-24 16:47:35.815635] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000045380 00:30:39.204 [2024-07-24 16:47:35.815670] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:39.204 [2024-07-24 16:47:35.816041] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000041990 00:30:39.204 [2024-07-24 16:47:35.816341] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000045380 00:30:39.204 [2024-07-24 16:47:35.816358] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000045380 00:30:39.204 [2024-07-24 16:47:35.816605] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:39.204 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:39.204 "name": "raid_bdev1", 00:30:39.205 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:39.205 "strip_size_kb": 0, 00:30:39.205 "state": "online", 00:30:39.205 "raid_level": "raid1", 00:30:39.205 "superblock": true, 00:30:39.205 "num_base_bdevs": 4, 00:30:39.205 "num_base_bdevs_discovered": 3, 00:30:39.205 "num_base_bdevs_operational": 3, 00:30:39.205 "base_bdevs_list": [ 00:30:39.205 { 00:30:39.205 "name": "spare", 00:30:39.205 "uuid": "b0d82da0-2684-547b-88ee-7e0df12cc093", 00:30:39.205 "is_configured": true, 00:30:39.205 "data_offset": 2048, 00:30:39.205 "data_size": 63488 00:30:39.205 }, 00:30:39.205 { 00:30:39.205 "name": null, 00:30:39.205 
"uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.205 "is_configured": false, 00:30:39.205 "data_offset": 2048, 00:30:39.205 "data_size": 63488 00:30:39.205 }, 00:30:39.205 { 00:30:39.205 "name": "BaseBdev3", 00:30:39.205 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:39.205 "is_configured": true, 00:30:39.205 "data_offset": 2048, 00:30:39.205 "data_size": 63488 00:30:39.205 }, 00:30:39.205 { 00:30:39.205 "name": "BaseBdev4", 00:30:39.205 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:39.205 "is_configured": true, 00:30:39.205 "data_offset": 2048, 00:30:39.205 "data_size": 63488 00:30:39.205 } 00:30:39.205 ] 00:30:39.205 }' 00:30:39.205 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:39.205 16:47:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:39.772 16:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:39.772 16:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:39.772 16:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:39.772 16:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:39.772 16:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:39.772 16:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.772 16:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:40.034 16:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:40.034 "name": "raid_bdev1", 00:30:40.034 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:40.034 "strip_size_kb": 0, 
00:30:40.034 "state": "online", 00:30:40.034 "raid_level": "raid1", 00:30:40.034 "superblock": true, 00:30:40.034 "num_base_bdevs": 4, 00:30:40.034 "num_base_bdevs_discovered": 3, 00:30:40.034 "num_base_bdevs_operational": 3, 00:30:40.034 "base_bdevs_list": [ 00:30:40.034 { 00:30:40.034 "name": "spare", 00:30:40.034 "uuid": "b0d82da0-2684-547b-88ee-7e0df12cc093", 00:30:40.034 "is_configured": true, 00:30:40.034 "data_offset": 2048, 00:30:40.034 "data_size": 63488 00:30:40.034 }, 00:30:40.034 { 00:30:40.034 "name": null, 00:30:40.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.034 "is_configured": false, 00:30:40.034 "data_offset": 2048, 00:30:40.034 "data_size": 63488 00:30:40.034 }, 00:30:40.034 { 00:30:40.034 "name": "BaseBdev3", 00:30:40.034 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:40.034 "is_configured": true, 00:30:40.034 "data_offset": 2048, 00:30:40.034 "data_size": 63488 00:30:40.034 }, 00:30:40.034 { 00:30:40.034 "name": "BaseBdev4", 00:30:40.034 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:40.034 "is_configured": true, 00:30:40.034 "data_offset": 2048, 00:30:40.034 "data_size": 63488 00:30:40.034 } 00:30:40.034 ] 00:30:40.034 }' 00:30:40.034 16:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:40.034 16:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:40.034 16:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:40.034 16:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:40.034 16:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.034 16:47:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:40.292 16:47:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:30:40.292 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:40.551 [2024-07-24 16:47:37.284808] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:40.551 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:40.551 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:40.551 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:40.551 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:40.551 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:40.551 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:40.551 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:40.551 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:40.551 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:40.551 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:40.551 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.551 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:40.810 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:40.810 "name": 
"raid_bdev1", 00:30:40.810 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:40.810 "strip_size_kb": 0, 00:30:40.810 "state": "online", 00:30:40.810 "raid_level": "raid1", 00:30:40.810 "superblock": true, 00:30:40.810 "num_base_bdevs": 4, 00:30:40.810 "num_base_bdevs_discovered": 2, 00:30:40.810 "num_base_bdevs_operational": 2, 00:30:40.810 "base_bdevs_list": [ 00:30:40.810 { 00:30:40.810 "name": null, 00:30:40.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.810 "is_configured": false, 00:30:40.810 "data_offset": 2048, 00:30:40.810 "data_size": 63488 00:30:40.810 }, 00:30:40.810 { 00:30:40.810 "name": null, 00:30:40.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.810 "is_configured": false, 00:30:40.810 "data_offset": 2048, 00:30:40.810 "data_size": 63488 00:30:40.810 }, 00:30:40.810 { 00:30:40.810 "name": "BaseBdev3", 00:30:40.810 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:40.810 "is_configured": true, 00:30:40.810 "data_offset": 2048, 00:30:40.810 "data_size": 63488 00:30:40.810 }, 00:30:40.810 { 00:30:40.810 "name": "BaseBdev4", 00:30:40.810 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:40.810 "is_configured": true, 00:30:40.810 "data_offset": 2048, 00:30:40.810 "data_size": 63488 00:30:40.810 } 00:30:40.810 ] 00:30:40.810 }' 00:30:40.810 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:40.810 16:47:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:41.377 16:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:41.637 [2024-07-24 16:47:38.380016] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:41.637 [2024-07-24 16:47:38.380264] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid 
bdev raid_bdev1 (6) 00:30:41.637 [2024-07-24 16:47:38.380293] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:41.637 [2024-07-24 16:47:38.380332] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:41.637 [2024-07-24 16:47:38.402490] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000041a60 00:30:41.637 [2024-07-24 16:47:38.404835] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:41.637 16:47:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:30:42.574 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:42.574 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:42.574 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:42.574 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:42.574 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:42.574 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.574 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:42.832 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:42.832 "name": "raid_bdev1", 00:30:42.833 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:42.833 "strip_size_kb": 0, 00:30:42.833 "state": "online", 00:30:42.833 "raid_level": "raid1", 00:30:42.833 "superblock": true, 00:30:42.833 "num_base_bdevs": 4, 00:30:42.833 "num_base_bdevs_discovered": 3, 00:30:42.833 
"num_base_bdevs_operational": 3, 00:30:42.833 "process": { 00:30:42.833 "type": "rebuild", 00:30:42.833 "target": "spare", 00:30:42.833 "progress": { 00:30:42.833 "blocks": 24576, 00:30:42.833 "percent": 38 00:30:42.833 } 00:30:42.833 }, 00:30:42.833 "base_bdevs_list": [ 00:30:42.833 { 00:30:42.833 "name": "spare", 00:30:42.833 "uuid": "b0d82da0-2684-547b-88ee-7e0df12cc093", 00:30:42.833 "is_configured": true, 00:30:42.833 "data_offset": 2048, 00:30:42.833 "data_size": 63488 00:30:42.833 }, 00:30:42.833 { 00:30:42.833 "name": null, 00:30:42.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.833 "is_configured": false, 00:30:42.833 "data_offset": 2048, 00:30:42.833 "data_size": 63488 00:30:42.833 }, 00:30:42.833 { 00:30:42.833 "name": "BaseBdev3", 00:30:42.833 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:42.833 "is_configured": true, 00:30:42.833 "data_offset": 2048, 00:30:42.833 "data_size": 63488 00:30:42.833 }, 00:30:42.833 { 00:30:42.833 "name": "BaseBdev4", 00:30:42.833 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:42.833 "is_configured": true, 00:30:42.833 "data_offset": 2048, 00:30:42.833 "data_size": 63488 00:30:42.833 } 00:30:42.833 ] 00:30:42.833 }' 00:30:42.833 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:42.833 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:42.833 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:43.091 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:43.091 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:43.091 [2024-07-24 16:47:39.874230] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:43.091 
[2024-07-24 16:47:39.917151] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:43.091 [2024-07-24 16:47:39.917222] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:43.091 [2024-07-24 16:47:39.917244] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:43.091 [2024-07-24 16:47:39.917259] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:43.350 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:43.350 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:43.351 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:43.351 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:43.351 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:43.351 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:43.351 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:43.351 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:43.351 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:43.351 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:43.351 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.351 16:47:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:43.351 16:47:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:43.351 "name": "raid_bdev1", 00:30:43.351 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:43.351 "strip_size_kb": 0, 00:30:43.351 "state": "online", 00:30:43.351 "raid_level": "raid1", 00:30:43.351 "superblock": true, 00:30:43.351 "num_base_bdevs": 4, 00:30:43.351 "num_base_bdevs_discovered": 2, 00:30:43.351 "num_base_bdevs_operational": 2, 00:30:43.351 "base_bdevs_list": [ 00:30:43.351 { 00:30:43.351 "name": null, 00:30:43.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.351 "is_configured": false, 00:30:43.351 "data_offset": 2048, 00:30:43.351 "data_size": 63488 00:30:43.351 }, 00:30:43.351 { 00:30:43.351 "name": null, 00:30:43.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.351 "is_configured": false, 00:30:43.351 "data_offset": 2048, 00:30:43.351 "data_size": 63488 00:30:43.351 }, 00:30:43.351 { 00:30:43.351 "name": "BaseBdev3", 00:30:43.351 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:43.351 "is_configured": true, 00:30:43.351 "data_offset": 2048, 00:30:43.351 "data_size": 63488 00:30:43.351 }, 00:30:43.351 { 00:30:43.351 "name": "BaseBdev4", 00:30:43.351 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:43.351 "is_configured": true, 00:30:43.351 "data_offset": 2048, 00:30:43.351 "data_size": 63488 00:30:43.351 } 00:30:43.351 ] 00:30:43.351 }' 00:30:43.351 16:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:43.351 16:47:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:43.919 16:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:44.197 [2024-07-24 16:47:40.931784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:44.197 [2024-07-24 16:47:40.931859] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:44.197 [2024-07-24 16:47:40.931891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045980 00:30:44.197 [2024-07-24 16:47:40.931914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:44.197 [2024-07-24 16:47:40.932560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:44.197 [2024-07-24 16:47:40.932592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:44.197 [2024-07-24 16:47:40.932721] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:44.197 [2024-07-24 16:47:40.932743] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:44.197 [2024-07-24 16:47:40.932760] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:44.197 [2024-07-24 16:47:40.932791] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:44.197 [2024-07-24 16:47:40.956520] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000041b30 00:30:44.197 spare 00:30:44.197 [2024-07-24 16:47:40.958859] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:44.197 16:47:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:30:45.135 16:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:45.135 16:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:45.135 16:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:45.135 16:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:45.135 16:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:45.135 16:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.135 16:47:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:45.394 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:45.394 "name": "raid_bdev1", 00:30:45.394 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:45.394 "strip_size_kb": 0, 00:30:45.394 "state": "online", 00:30:45.394 "raid_level": "raid1", 00:30:45.394 "superblock": true, 00:30:45.394 "num_base_bdevs": 4, 00:30:45.394 "num_base_bdevs_discovered": 3, 00:30:45.394 "num_base_bdevs_operational": 3, 00:30:45.394 "process": { 00:30:45.394 "type": "rebuild", 00:30:45.394 "target": "spare", 00:30:45.394 "progress": { 00:30:45.394 
"blocks": 22528, 00:30:45.394 "percent": 35 00:30:45.394 } 00:30:45.394 }, 00:30:45.394 "base_bdevs_list": [ 00:30:45.394 { 00:30:45.394 "name": "spare", 00:30:45.394 "uuid": "b0d82da0-2684-547b-88ee-7e0df12cc093", 00:30:45.394 "is_configured": true, 00:30:45.394 "data_offset": 2048, 00:30:45.394 "data_size": 63488 00:30:45.394 }, 00:30:45.394 { 00:30:45.394 "name": null, 00:30:45.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.394 "is_configured": false, 00:30:45.394 "data_offset": 2048, 00:30:45.394 "data_size": 63488 00:30:45.394 }, 00:30:45.394 { 00:30:45.394 "name": "BaseBdev3", 00:30:45.394 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:45.394 "is_configured": true, 00:30:45.394 "data_offset": 2048, 00:30:45.394 "data_size": 63488 00:30:45.394 }, 00:30:45.394 { 00:30:45.394 "name": "BaseBdev4", 00:30:45.394 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:45.394 "is_configured": true, 00:30:45.394 "data_offset": 2048, 00:30:45.394 "data_size": 63488 00:30:45.394 } 00:30:45.394 ] 00:30:45.394 }' 00:30:45.394 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:45.394 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:45.394 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:45.394 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:45.394 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:45.653 [2024-07-24 16:47:42.360058] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:45.653 [2024-07-24 16:47:42.370469] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:45.653 [2024-07-24 
16:47:42.370536] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:45.653 [2024-07-24 16:47:42.370561] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:45.653 [2024-07-24 16:47:42.370573] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:45.653 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:45.653 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:45.653 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:45.653 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:45.654 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:45.654 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:45.654 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:45.654 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:45.654 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:45.654 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:45.654 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.654 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:45.912 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:45.912 "name": "raid_bdev1", 00:30:45.912 "uuid": 
"7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:45.912 "strip_size_kb": 0, 00:30:45.912 "state": "online", 00:30:45.912 "raid_level": "raid1", 00:30:45.912 "superblock": true, 00:30:45.912 "num_base_bdevs": 4, 00:30:45.912 "num_base_bdevs_discovered": 2, 00:30:45.912 "num_base_bdevs_operational": 2, 00:30:45.912 "base_bdevs_list": [ 00:30:45.912 { 00:30:45.912 "name": null, 00:30:45.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.912 "is_configured": false, 00:30:45.912 "data_offset": 2048, 00:30:45.912 "data_size": 63488 00:30:45.912 }, 00:30:45.912 { 00:30:45.912 "name": null, 00:30:45.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.912 "is_configured": false, 00:30:45.912 "data_offset": 2048, 00:30:45.912 "data_size": 63488 00:30:45.912 }, 00:30:45.912 { 00:30:45.912 "name": "BaseBdev3", 00:30:45.912 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:45.912 "is_configured": true, 00:30:45.912 "data_offset": 2048, 00:30:45.912 "data_size": 63488 00:30:45.912 }, 00:30:45.912 { 00:30:45.912 "name": "BaseBdev4", 00:30:45.912 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:45.912 "is_configured": true, 00:30:45.912 "data_offset": 2048, 00:30:45.912 "data_size": 63488 00:30:45.912 } 00:30:45.912 ] 00:30:45.912 }' 00:30:45.912 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:45.912 16:47:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:46.478 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:46.478 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:46.478 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:46.478 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:46.478 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:46.478 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.478 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:46.478 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:46.478 "name": "raid_bdev1", 00:30:46.478 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:46.478 "strip_size_kb": 0, 00:30:46.478 "state": "online", 00:30:46.478 "raid_level": "raid1", 00:30:46.478 "superblock": true, 00:30:46.478 "num_base_bdevs": 4, 00:30:46.478 "num_base_bdevs_discovered": 2, 00:30:46.478 "num_base_bdevs_operational": 2, 00:30:46.478 "base_bdevs_list": [ 00:30:46.478 { 00:30:46.478 "name": null, 00:30:46.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.478 "is_configured": false, 00:30:46.478 "data_offset": 2048, 00:30:46.478 "data_size": 63488 00:30:46.478 }, 00:30:46.478 { 00:30:46.478 "name": null, 00:30:46.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.478 "is_configured": false, 00:30:46.478 "data_offset": 2048, 00:30:46.478 "data_size": 63488 00:30:46.478 }, 00:30:46.478 { 00:30:46.478 "name": "BaseBdev3", 00:30:46.478 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:46.478 "is_configured": true, 00:30:46.478 "data_offset": 2048, 00:30:46.478 "data_size": 63488 00:30:46.478 }, 00:30:46.478 { 00:30:46.478 "name": "BaseBdev4", 00:30:46.478 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:46.478 "is_configured": true, 00:30:46.478 "data_offset": 2048, 00:30:46.478 "data_size": 63488 00:30:46.478 } 00:30:46.478 ] 00:30:46.478 }' 00:30:46.735 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:46.735 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 
-- # [[ none == \n\o\n\e ]] 00:30:46.735 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:46.735 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:46.735 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:46.993 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:46.993 [2024-07-24 16:47:43.781328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:46.993 [2024-07-24 16:47:43.781410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:46.993 [2024-07-24 16:47:43.781445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045f80 00:30:46.993 [2024-07-24 16:47:43.781462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:46.993 [2024-07-24 16:47:43.782048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:46.993 [2024-07-24 16:47:43.782078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:46.993 [2024-07-24 16:47:43.782202] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:46.993 [2024-07-24 16:47:43.782226] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:46.993 [2024-07-24 16:47:43.782242] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:46.993 BaseBdev1 00:30:46.993 16:47:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # 
sleep 1 00:30:48.365 16:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:48.365 16:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:48.365 16:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:48.365 16:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:48.365 16:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:48.365 16:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:48.365 16:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:48.365 16:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:48.365 16:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:48.365 16:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:48.366 16:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.366 16:47:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.366 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:48.366 "name": "raid_bdev1", 00:30:48.366 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:48.366 "strip_size_kb": 0, 00:30:48.366 "state": "online", 00:30:48.366 "raid_level": "raid1", 00:30:48.366 "superblock": true, 00:30:48.366 "num_base_bdevs": 4, 00:30:48.366 "num_base_bdevs_discovered": 2, 00:30:48.366 "num_base_bdevs_operational": 2, 00:30:48.366 "base_bdevs_list": [ 00:30:48.366 { 00:30:48.366 "name": 
null, 00:30:48.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.366 "is_configured": false, 00:30:48.366 "data_offset": 2048, 00:30:48.366 "data_size": 63488 00:30:48.366 }, 00:30:48.366 { 00:30:48.366 "name": null, 00:30:48.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.366 "is_configured": false, 00:30:48.366 "data_offset": 2048, 00:30:48.366 "data_size": 63488 00:30:48.366 }, 00:30:48.366 { 00:30:48.366 "name": "BaseBdev3", 00:30:48.366 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:48.366 "is_configured": true, 00:30:48.366 "data_offset": 2048, 00:30:48.366 "data_size": 63488 00:30:48.366 }, 00:30:48.366 { 00:30:48.366 "name": "BaseBdev4", 00:30:48.366 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:48.366 "is_configured": true, 00:30:48.366 "data_offset": 2048, 00:30:48.366 "data_size": 63488 00:30:48.366 } 00:30:48.366 ] 00:30:48.366 }' 00:30:48.366 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:48.366 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:48.940 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:48.940 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:48.940 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:48.940 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:48.940 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:48.940 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.940 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:49.246 "name": "raid_bdev1", 00:30:49.246 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:49.246 "strip_size_kb": 0, 00:30:49.246 "state": "online", 00:30:49.246 "raid_level": "raid1", 00:30:49.246 "superblock": true, 00:30:49.246 "num_base_bdevs": 4, 00:30:49.246 "num_base_bdevs_discovered": 2, 00:30:49.246 "num_base_bdevs_operational": 2, 00:30:49.246 "base_bdevs_list": [ 00:30:49.246 { 00:30:49.246 "name": null, 00:30:49.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:49.246 "is_configured": false, 00:30:49.246 "data_offset": 2048, 00:30:49.246 "data_size": 63488 00:30:49.246 }, 00:30:49.246 { 00:30:49.246 "name": null, 00:30:49.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:49.246 "is_configured": false, 00:30:49.246 "data_offset": 2048, 00:30:49.246 "data_size": 63488 00:30:49.246 }, 00:30:49.246 { 00:30:49.246 "name": "BaseBdev3", 00:30:49.246 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:49.246 "is_configured": true, 00:30:49.246 "data_offset": 2048, 00:30:49.246 "data_size": 63488 00:30:49.246 }, 00:30:49.246 { 00:30:49.246 "name": "BaseBdev4", 00:30:49.246 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:49.246 "is_configured": true, 00:30:49.246 "data_offset": 2048, 00:30:49.246 "data_size": 63488 00:30:49.246 } 00:30:49.246 ] 00:30:49.246 }' 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:30:49.246 16:47:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:49.518 [2024-07-24 16:47:46.116054] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:30:49.518 [2024-07-24 16:47:46.116246] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:49.518 [2024-07-24 16:47:46.116268] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:49.518 request: 00:30:49.518 { 00:30:49.518 "base_bdev": "BaseBdev1", 00:30:49.518 "raid_bdev": "raid_bdev1", 00:30:49.518 "method": "bdev_raid_add_base_bdev", 00:30:49.518 "req_id": 1 00:30:49.518 } 00:30:49.518 Got JSON-RPC error response 00:30:49.518 response: 00:30:49.518 { 00:30:49.518 "code": -22, 00:30:49.518 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:49.518 } 00:30:49.518 16:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:30:49.518 16:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:49.518 16:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:49.518 16:47:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:49.518 16:47:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:30:50.451 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:50.451 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:50.451 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:50.451 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:50.451 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:50.451 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:30:50.451 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:50.451 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:50.451 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:50.451 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:50.451 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:50.451 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:50.709 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:50.709 "name": "raid_bdev1", 00:30:50.709 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:50.709 "strip_size_kb": 0, 00:30:50.709 "state": "online", 00:30:50.709 "raid_level": "raid1", 00:30:50.709 "superblock": true, 00:30:50.709 "num_base_bdevs": 4, 00:30:50.709 "num_base_bdevs_discovered": 2, 00:30:50.709 "num_base_bdevs_operational": 2, 00:30:50.709 "base_bdevs_list": [ 00:30:50.709 { 00:30:50.709 "name": null, 00:30:50.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:50.709 "is_configured": false, 00:30:50.709 "data_offset": 2048, 00:30:50.709 "data_size": 63488 00:30:50.709 }, 00:30:50.709 { 00:30:50.709 "name": null, 00:30:50.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:50.709 "is_configured": false, 00:30:50.709 "data_offset": 2048, 00:30:50.709 "data_size": 63488 00:30:50.709 }, 00:30:50.709 { 00:30:50.709 "name": "BaseBdev3", 00:30:50.709 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:50.709 "is_configured": true, 00:30:50.709 "data_offset": 2048, 00:30:50.709 "data_size": 63488 00:30:50.709 }, 00:30:50.709 { 00:30:50.709 "name": "BaseBdev4", 00:30:50.709 "uuid": 
"ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:50.709 "is_configured": true, 00:30:50.709 "data_offset": 2048, 00:30:50.709 "data_size": 63488 00:30:50.709 } 00:30:50.709 ] 00:30:50.709 }' 00:30:50.709 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:50.709 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:51.274 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:51.274 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:51.274 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:51.274 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:51.274 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:51.274 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.274 16:47:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:51.274 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:51.274 "name": "raid_bdev1", 00:30:51.274 "uuid": "7088100a-3c8a-45d9-a9ad-1fe8fc026af8", 00:30:51.274 "strip_size_kb": 0, 00:30:51.274 "state": "online", 00:30:51.274 "raid_level": "raid1", 00:30:51.274 "superblock": true, 00:30:51.274 "num_base_bdevs": 4, 00:30:51.274 "num_base_bdevs_discovered": 2, 00:30:51.274 "num_base_bdevs_operational": 2, 00:30:51.274 "base_bdevs_list": [ 00:30:51.274 { 00:30:51.274 "name": null, 00:30:51.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.274 "is_configured": false, 00:30:51.274 "data_offset": 2048, 00:30:51.274 "data_size": 63488 
00:30:51.274 }, 00:30:51.274 { 00:30:51.274 "name": null, 00:30:51.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.274 "is_configured": false, 00:30:51.274 "data_offset": 2048, 00:30:51.274 "data_size": 63488 00:30:51.274 }, 00:30:51.274 { 00:30:51.274 "name": "BaseBdev3", 00:30:51.274 "uuid": "073db14c-8531-5eda-b38a-6d511fd896f8", 00:30:51.274 "is_configured": true, 00:30:51.274 "data_offset": 2048, 00:30:51.274 "data_size": 63488 00:30:51.274 }, 00:30:51.274 { 00:30:51.274 "name": "BaseBdev4", 00:30:51.274 "uuid": "ff9a10aa-74aa-5f6a-9062-5b9ba9492d8a", 00:30:51.274 "is_configured": true, 00:30:51.274 "data_offset": 2048, 00:30:51.274 "data_size": 63488 00:30:51.274 } 00:30:51.274 ] 00:30:51.274 }' 00:30:51.274 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:51.532 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:51.532 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:51.532 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:51.532 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 1775779 00:30:51.532 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 1775779 ']' 00:30:51.532 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 1775779 00:30:51.532 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:30:51.532 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:51.532 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1775779 00:30:51.532 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:51.532 16:47:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:51.532 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1775779' 00:30:51.532 killing process with pid 1775779 00:30:51.532 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 1775779 00:30:51.532 Received shutdown signal, test time was about 26.057599 seconds 00:30:51.532 00:30:51.532 Latency(us) 00:30:51.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.532 =================================================================================================================== 00:30:51.532 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:51.532 [2024-07-24 16:47:48.272065] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:51.532 16:47:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 1775779 00:30:51.532 [2024-07-24 16:47:48.272223] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:51.532 [2024-07-24 16:47:48.272309] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:51.532 [2024-07-24 16:47:48.272326] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000045380 name raid_bdev1, state offline 00:30:52.099 [2024-07-24 16:47:48.794306] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:53.999 16:47:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:30:53.999 00:30:53.999 real 0m33.923s 00:30:53.999 user 0m51.144s 00:30:53.999 sys 0m5.060s 00:30:53.999 16:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:53.999 16:47:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:53.999 ************************************ 00:30:53.999 END TEST 
raid_rebuild_test_sb_io 00:30:53.999 ************************************ 00:30:53.999 16:47:50 bdev_raid -- bdev/bdev_raid.sh@964 -- # '[' n == y ']' 00:30:53.999 16:47:50 bdev_raid -- bdev/bdev_raid.sh@976 -- # base_blocklen=4096 00:30:53.999 16:47:50 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:30:53.999 16:47:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:53.999 16:47:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:53.999 16:47:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:53.999 ************************************ 00:30:53.999 START TEST raid_state_function_test_sb_4k 00:30:53.999 ************************************ 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=1781801 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1781801' 00:30:53.999 Process raid pid: 1781801 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@246 -- # waitforlisten 1781801 /var/tmp/spdk-raid.sock 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 1781801 ']' 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:53.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:53.999 16:47:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:54.257 [2024-07-24 16:47:50.860831] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:30:54.257 [2024-07-24 16:47:50.860944] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:54.257 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:54.257 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:54.257 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:54.257 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:54.258 [2024-07-24 16:47:51.091261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.823 [2024-07-24 16:47:51.384818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.081 [2024-07-24 16:47:51.722848] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:55.081 [2024-07-24 16:47:51.722886] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:55.081 16:47:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:55.081 16:47:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:30:55.081 16:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:55.339 [2024-07-24 16:47:52.097794] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:55.339 [2024-07-24 16:47:52.097855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:30:55.339 [2024-07-24 16:47:52.097870] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:55.339 [2024-07-24 16:47:52.097887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:55.339 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:55.339 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:55.339 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:55.339 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:55.339 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:55.340 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:55.340 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:55.340 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:55.340 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:55.340 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:55.340 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.340 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:55.598 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:55.598 "name": "Existed_Raid", 
00:30:55.598 "uuid": "3ce32f93-4ef2-4305-9809-67b7a7ddf966", 00:30:55.598 "strip_size_kb": 0, 00:30:55.598 "state": "configuring", 00:30:55.598 "raid_level": "raid1", 00:30:55.598 "superblock": true, 00:30:55.598 "num_base_bdevs": 2, 00:30:55.598 "num_base_bdevs_discovered": 0, 00:30:55.598 "num_base_bdevs_operational": 2, 00:30:55.598 "base_bdevs_list": [ 00:30:55.598 { 00:30:55.598 "name": "BaseBdev1", 00:30:55.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.598 "is_configured": false, 00:30:55.598 "data_offset": 0, 00:30:55.598 "data_size": 0 00:30:55.598 }, 00:30:55.598 { 00:30:55.598 "name": "BaseBdev2", 00:30:55.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:55.598 "is_configured": false, 00:30:55.598 "data_offset": 0, 00:30:55.598 "data_size": 0 00:30:55.598 } 00:30:55.598 ] 00:30:55.598 }' 00:30:55.598 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:55.598 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:56.163 16:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:56.421 [2024-07-24 16:47:53.084284] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:56.421 [2024-07-24 16:47:53.084325] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:30:56.421 16:47:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:56.679 [2024-07-24 16:47:53.312939] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:56.679 [2024-07-24 16:47:53.312983] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:56.679 [2024-07-24 16:47:53.312998] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:56.679 [2024-07-24 16:47:53.313015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:56.679 16:47:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:30:56.938 [2024-07-24 16:47:53.600111] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:56.938 BaseBdev1 00:30:56.938 16:47:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:30:56.938 16:47:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:30:56.938 16:47:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:56.938 16:47:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:30:56.938 16:47:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:56.938 16:47:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:56.938 16:47:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:57.195 16:47:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:57.453 [ 00:30:57.453 { 00:30:57.453 "name": "BaseBdev1", 00:30:57.453 "aliases": [ 00:30:57.453 "f75b0228-608f-4c55-870e-c45ecd88c1c1" 
00:30:57.453 ], 00:30:57.453 "product_name": "Malloc disk", 00:30:57.453 "block_size": 4096, 00:30:57.453 "num_blocks": 8192, 00:30:57.453 "uuid": "f75b0228-608f-4c55-870e-c45ecd88c1c1", 00:30:57.453 "assigned_rate_limits": { 00:30:57.453 "rw_ios_per_sec": 0, 00:30:57.453 "rw_mbytes_per_sec": 0, 00:30:57.453 "r_mbytes_per_sec": 0, 00:30:57.453 "w_mbytes_per_sec": 0 00:30:57.453 }, 00:30:57.453 "claimed": true, 00:30:57.453 "claim_type": "exclusive_write", 00:30:57.453 "zoned": false, 00:30:57.453 "supported_io_types": { 00:30:57.453 "read": true, 00:30:57.453 "write": true, 00:30:57.453 "unmap": true, 00:30:57.453 "flush": true, 00:30:57.453 "reset": true, 00:30:57.453 "nvme_admin": false, 00:30:57.453 "nvme_io": false, 00:30:57.453 "nvme_io_md": false, 00:30:57.453 "write_zeroes": true, 00:30:57.453 "zcopy": true, 00:30:57.453 "get_zone_info": false, 00:30:57.453 "zone_management": false, 00:30:57.453 "zone_append": false, 00:30:57.453 "compare": false, 00:30:57.453 "compare_and_write": false, 00:30:57.453 "abort": true, 00:30:57.453 "seek_hole": false, 00:30:57.453 "seek_data": false, 00:30:57.453 "copy": true, 00:30:57.453 "nvme_iov_md": false 00:30:57.453 }, 00:30:57.453 "memory_domains": [ 00:30:57.453 { 00:30:57.453 "dma_device_id": "system", 00:30:57.453 "dma_device_type": 1 00:30:57.453 }, 00:30:57.453 { 00:30:57.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:57.453 "dma_device_type": 2 00:30:57.453 } 00:30:57.453 ], 00:30:57.453 "driver_specific": {} 00:30:57.453 } 00:30:57.453 ] 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:57.453 "name": "Existed_Raid", 00:30:57.453 "uuid": "1c7954ff-cf4d-4bd8-bc6b-6021f563bb66", 00:30:57.453 "strip_size_kb": 0, 00:30:57.453 "state": "configuring", 00:30:57.453 "raid_level": "raid1", 00:30:57.453 "superblock": true, 00:30:57.453 "num_base_bdevs": 2, 00:30:57.453 "num_base_bdevs_discovered": 1, 00:30:57.453 "num_base_bdevs_operational": 2, 00:30:57.453 "base_bdevs_list": [ 00:30:57.453 { 00:30:57.453 "name": "BaseBdev1", 00:30:57.453 "uuid": "f75b0228-608f-4c55-870e-c45ecd88c1c1", 00:30:57.453 "is_configured": true, 00:30:57.453 "data_offset": 256, 00:30:57.453 "data_size": 7936 00:30:57.453 }, 00:30:57.453 { 00:30:57.453 "name": "BaseBdev2", 00:30:57.453 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:57.453 "is_configured": false, 00:30:57.453 "data_offset": 0, 00:30:57.453 "data_size": 0 00:30:57.453 } 00:30:57.453 ] 00:30:57.453 }' 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:57.453 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:58.019 16:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:58.276 [2024-07-24 16:47:55.068130] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:58.276 [2024-07-24 16:47:55.068198] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:30:58.276 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:30:58.534 [2024-07-24 16:47:55.296850] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:58.534 [2024-07-24 16:47:55.299217] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:58.534 [2024-07-24 16:47:55.299262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:58.534 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:30:58.534 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:58.534 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:30:58.534 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:30:58.534 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:58.534 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:58.534 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:58.534 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:58.534 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:58.534 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:58.534 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:58.534 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:58.534 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.534 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:58.792 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:58.792 "name": "Existed_Raid", 00:30:58.792 "uuid": "0458836c-634a-42aa-a711-6c38669e33ae", 00:30:58.792 "strip_size_kb": 0, 00:30:58.792 "state": "configuring", 00:30:58.792 "raid_level": "raid1", 00:30:58.792 "superblock": true, 00:30:58.792 "num_base_bdevs": 2, 00:30:58.792 "num_base_bdevs_discovered": 1, 00:30:58.792 "num_base_bdevs_operational": 2, 00:30:58.792 "base_bdevs_list": [ 00:30:58.792 { 00:30:58.792 "name": "BaseBdev1", 00:30:58.792 "uuid": "f75b0228-608f-4c55-870e-c45ecd88c1c1", 00:30:58.792 "is_configured": true, 00:30:58.792 "data_offset": 256, 
00:30:58.792 "data_size": 7936 00:30:58.792 }, 00:30:58.792 { 00:30:58.792 "name": "BaseBdev2", 00:30:58.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:58.792 "is_configured": false, 00:30:58.792 "data_offset": 0, 00:30:58.792 "data_size": 0 00:30:58.792 } 00:30:58.792 ] 00:30:58.792 }' 00:30:58.792 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:58.792 16:47:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:30:59.357 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:30:59.614 [2024-07-24 16:47:56.374564] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:59.614 [2024-07-24 16:47:56.374854] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:30:59.614 [2024-07-24 16:47:56.374879] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:30:59.614 [2024-07-24 16:47:56.375231] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:30:59.614 [2024-07-24 16:47:56.375465] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:30:59.614 [2024-07-24 16:47:56.375484] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:30:59.614 BaseBdev2 00:30:59.614 [2024-07-24 16:47:56.375689] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:59.614 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:30:59.614 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:30:59.614 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:59.614 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:30:59.614 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:59.614 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:59.614 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:59.871 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:00.128 [ 00:31:00.128 { 00:31:00.128 "name": "BaseBdev2", 00:31:00.128 "aliases": [ 00:31:00.128 "5d8f6800-c5a6-4979-a9b0-bdc23a3e0097" 00:31:00.128 ], 00:31:00.128 "product_name": "Malloc disk", 00:31:00.128 "block_size": 4096, 00:31:00.128 "num_blocks": 8192, 00:31:00.128 "uuid": "5d8f6800-c5a6-4979-a9b0-bdc23a3e0097", 00:31:00.128 "assigned_rate_limits": { 00:31:00.128 "rw_ios_per_sec": 0, 00:31:00.128 "rw_mbytes_per_sec": 0, 00:31:00.128 "r_mbytes_per_sec": 0, 00:31:00.128 "w_mbytes_per_sec": 0 00:31:00.128 }, 00:31:00.128 "claimed": true, 00:31:00.128 "claim_type": "exclusive_write", 00:31:00.128 "zoned": false, 00:31:00.128 "supported_io_types": { 00:31:00.128 "read": true, 00:31:00.128 "write": true, 00:31:00.128 "unmap": true, 00:31:00.128 "flush": true, 00:31:00.128 "reset": true, 00:31:00.128 "nvme_admin": false, 00:31:00.128 "nvme_io": false, 00:31:00.128 "nvme_io_md": false, 00:31:00.128 "write_zeroes": true, 00:31:00.128 "zcopy": true, 00:31:00.128 "get_zone_info": false, 00:31:00.128 "zone_management": false, 00:31:00.128 "zone_append": false, 00:31:00.128 "compare": false, 00:31:00.128 "compare_and_write": false, 
00:31:00.128 "abort": true, 00:31:00.128 "seek_hole": false, 00:31:00.128 "seek_data": false, 00:31:00.128 "copy": true, 00:31:00.128 "nvme_iov_md": false 00:31:00.128 }, 00:31:00.128 "memory_domains": [ 00:31:00.128 { 00:31:00.128 "dma_device_id": "system", 00:31:00.128 "dma_device_type": 1 00:31:00.128 }, 00:31:00.128 { 00:31:00.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:00.128 "dma_device_type": 2 00:31:00.128 } 00:31:00.128 ], 00:31:00.128 "driver_specific": {} 00:31:00.128 } 00:31:00.128 ] 00:31:00.128 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:31:00.128 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:00.128 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:00.128 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:00.128 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:00.128 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:00.128 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:00.128 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:00.128 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:00.128 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:00.128 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:00.128 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:00.128 16:47:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:00.128 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.129 16:47:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:00.386 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:00.386 "name": "Existed_Raid", 00:31:00.386 "uuid": "0458836c-634a-42aa-a711-6c38669e33ae", 00:31:00.386 "strip_size_kb": 0, 00:31:00.386 "state": "online", 00:31:00.386 "raid_level": "raid1", 00:31:00.386 "superblock": true, 00:31:00.386 "num_base_bdevs": 2, 00:31:00.386 "num_base_bdevs_discovered": 2, 00:31:00.386 "num_base_bdevs_operational": 2, 00:31:00.386 "base_bdevs_list": [ 00:31:00.386 { 00:31:00.386 "name": "BaseBdev1", 00:31:00.386 "uuid": "f75b0228-608f-4c55-870e-c45ecd88c1c1", 00:31:00.386 "is_configured": true, 00:31:00.386 "data_offset": 256, 00:31:00.386 "data_size": 7936 00:31:00.386 }, 00:31:00.386 { 00:31:00.386 "name": "BaseBdev2", 00:31:00.386 "uuid": "5d8f6800-c5a6-4979-a9b0-bdc23a3e0097", 00:31:00.386 "is_configured": true, 00:31:00.386 "data_offset": 256, 00:31:00.386 "data_size": 7936 00:31:00.386 } 00:31:00.386 ] 00:31:00.386 }' 00:31:00.386 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:00.386 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:00.952 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:31:00.952 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:00.952 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:31:00.952 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:00.952 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:00.952 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:31:00.952 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:00.952 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:01.209 [2024-07-24 16:47:57.862984] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:01.209 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:01.209 "name": "Existed_Raid", 00:31:01.209 "aliases": [ 00:31:01.209 "0458836c-634a-42aa-a711-6c38669e33ae" 00:31:01.209 ], 00:31:01.209 "product_name": "Raid Volume", 00:31:01.209 "block_size": 4096, 00:31:01.209 "num_blocks": 7936, 00:31:01.209 "uuid": "0458836c-634a-42aa-a711-6c38669e33ae", 00:31:01.209 "assigned_rate_limits": { 00:31:01.209 "rw_ios_per_sec": 0, 00:31:01.209 "rw_mbytes_per_sec": 0, 00:31:01.209 "r_mbytes_per_sec": 0, 00:31:01.209 "w_mbytes_per_sec": 0 00:31:01.209 }, 00:31:01.209 "claimed": false, 00:31:01.209 "zoned": false, 00:31:01.209 "supported_io_types": { 00:31:01.209 "read": true, 00:31:01.209 "write": true, 00:31:01.209 "unmap": false, 00:31:01.209 "flush": false, 00:31:01.209 "reset": true, 00:31:01.209 "nvme_admin": false, 00:31:01.209 "nvme_io": false, 00:31:01.209 "nvme_io_md": false, 00:31:01.209 "write_zeroes": true, 00:31:01.209 "zcopy": false, 00:31:01.209 "get_zone_info": false, 00:31:01.209 "zone_management": false, 00:31:01.209 "zone_append": false, 00:31:01.209 "compare": false, 00:31:01.209 "compare_and_write": false, 00:31:01.209 
"abort": false, 00:31:01.209 "seek_hole": false, 00:31:01.209 "seek_data": false, 00:31:01.209 "copy": false, 00:31:01.209 "nvme_iov_md": false 00:31:01.209 }, 00:31:01.209 "memory_domains": [ 00:31:01.209 { 00:31:01.209 "dma_device_id": "system", 00:31:01.209 "dma_device_type": 1 00:31:01.209 }, 00:31:01.209 { 00:31:01.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:01.209 "dma_device_type": 2 00:31:01.209 }, 00:31:01.209 { 00:31:01.209 "dma_device_id": "system", 00:31:01.209 "dma_device_type": 1 00:31:01.209 }, 00:31:01.209 { 00:31:01.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:01.209 "dma_device_type": 2 00:31:01.209 } 00:31:01.209 ], 00:31:01.209 "driver_specific": { 00:31:01.209 "raid": { 00:31:01.209 "uuid": "0458836c-634a-42aa-a711-6c38669e33ae", 00:31:01.209 "strip_size_kb": 0, 00:31:01.209 "state": "online", 00:31:01.209 "raid_level": "raid1", 00:31:01.210 "superblock": true, 00:31:01.210 "num_base_bdevs": 2, 00:31:01.210 "num_base_bdevs_discovered": 2, 00:31:01.210 "num_base_bdevs_operational": 2, 00:31:01.210 "base_bdevs_list": [ 00:31:01.210 { 00:31:01.210 "name": "BaseBdev1", 00:31:01.210 "uuid": "f75b0228-608f-4c55-870e-c45ecd88c1c1", 00:31:01.210 "is_configured": true, 00:31:01.210 "data_offset": 256, 00:31:01.210 "data_size": 7936 00:31:01.210 }, 00:31:01.210 { 00:31:01.210 "name": "BaseBdev2", 00:31:01.210 "uuid": "5d8f6800-c5a6-4979-a9b0-bdc23a3e0097", 00:31:01.210 "is_configured": true, 00:31:01.210 "data_offset": 256, 00:31:01.210 "data_size": 7936 00:31:01.210 } 00:31:01.210 ] 00:31:01.210 } 00:31:01.210 } 00:31:01.210 }' 00:31:01.210 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:01.210 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:31:01.210 BaseBdev2' 00:31:01.210 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # 
for name in $base_bdev_names 00:31:01.210 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:31:01.210 16:47:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:01.467 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:01.467 "name": "BaseBdev1", 00:31:01.467 "aliases": [ 00:31:01.467 "f75b0228-608f-4c55-870e-c45ecd88c1c1" 00:31:01.467 ], 00:31:01.467 "product_name": "Malloc disk", 00:31:01.467 "block_size": 4096, 00:31:01.467 "num_blocks": 8192, 00:31:01.467 "uuid": "f75b0228-608f-4c55-870e-c45ecd88c1c1", 00:31:01.467 "assigned_rate_limits": { 00:31:01.467 "rw_ios_per_sec": 0, 00:31:01.467 "rw_mbytes_per_sec": 0, 00:31:01.467 "r_mbytes_per_sec": 0, 00:31:01.467 "w_mbytes_per_sec": 0 00:31:01.467 }, 00:31:01.467 "claimed": true, 00:31:01.467 "claim_type": "exclusive_write", 00:31:01.467 "zoned": false, 00:31:01.467 "supported_io_types": { 00:31:01.467 "read": true, 00:31:01.467 "write": true, 00:31:01.467 "unmap": true, 00:31:01.467 "flush": true, 00:31:01.467 "reset": true, 00:31:01.467 "nvme_admin": false, 00:31:01.467 "nvme_io": false, 00:31:01.467 "nvme_io_md": false, 00:31:01.467 "write_zeroes": true, 00:31:01.467 "zcopy": true, 00:31:01.467 "get_zone_info": false, 00:31:01.467 "zone_management": false, 00:31:01.467 "zone_append": false, 00:31:01.467 "compare": false, 00:31:01.467 "compare_and_write": false, 00:31:01.467 "abort": true, 00:31:01.467 "seek_hole": false, 00:31:01.467 "seek_data": false, 00:31:01.467 "copy": true, 00:31:01.467 "nvme_iov_md": false 00:31:01.467 }, 00:31:01.467 "memory_domains": [ 00:31:01.467 { 00:31:01.467 "dma_device_id": "system", 00:31:01.467 "dma_device_type": 1 00:31:01.467 }, 00:31:01.467 { 00:31:01.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:01.467 "dma_device_type": 2 
00:31:01.467 } 00:31:01.467 ], 00:31:01.467 "driver_specific": {} 00:31:01.467 }' 00:31:01.467 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:01.467 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:01.467 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:01.467 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:01.467 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:01.467 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:01.467 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:01.725 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:01.725 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:01.725 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:01.725 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:01.725 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:01.725 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:01.725 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:01.725 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:01.982 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:01.982 "name": "BaseBdev2", 00:31:01.982 "aliases": [ 
00:31:01.982 "5d8f6800-c5a6-4979-a9b0-bdc23a3e0097" 00:31:01.982 ], 00:31:01.982 "product_name": "Malloc disk", 00:31:01.982 "block_size": 4096, 00:31:01.982 "num_blocks": 8192, 00:31:01.982 "uuid": "5d8f6800-c5a6-4979-a9b0-bdc23a3e0097", 00:31:01.982 "assigned_rate_limits": { 00:31:01.982 "rw_ios_per_sec": 0, 00:31:01.982 "rw_mbytes_per_sec": 0, 00:31:01.982 "r_mbytes_per_sec": 0, 00:31:01.982 "w_mbytes_per_sec": 0 00:31:01.982 }, 00:31:01.982 "claimed": true, 00:31:01.982 "claim_type": "exclusive_write", 00:31:01.982 "zoned": false, 00:31:01.982 "supported_io_types": { 00:31:01.982 "read": true, 00:31:01.982 "write": true, 00:31:01.982 "unmap": true, 00:31:01.982 "flush": true, 00:31:01.982 "reset": true, 00:31:01.982 "nvme_admin": false, 00:31:01.982 "nvme_io": false, 00:31:01.982 "nvme_io_md": false, 00:31:01.982 "write_zeroes": true, 00:31:01.982 "zcopy": true, 00:31:01.982 "get_zone_info": false, 00:31:01.982 "zone_management": false, 00:31:01.982 "zone_append": false, 00:31:01.982 "compare": false, 00:31:01.983 "compare_and_write": false, 00:31:01.983 "abort": true, 00:31:01.983 "seek_hole": false, 00:31:01.983 "seek_data": false, 00:31:01.983 "copy": true, 00:31:01.983 "nvme_iov_md": false 00:31:01.983 }, 00:31:01.983 "memory_domains": [ 00:31:01.983 { 00:31:01.983 "dma_device_id": "system", 00:31:01.983 "dma_device_type": 1 00:31:01.983 }, 00:31:01.983 { 00:31:01.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:01.983 "dma_device_type": 2 00:31:01.983 } 00:31:01.983 ], 00:31:01.983 "driver_specific": {} 00:31:01.983 }' 00:31:01.983 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:01.983 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:01.983 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:01.983 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:31:02.240 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:02.240 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:02.240 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:02.240 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:02.240 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:02.240 16:47:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:02.240 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:02.240 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:02.240 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:02.497 [2024-07-24 16:47:59.274502] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:02.497 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:02.795 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:02.795 "name": "Existed_Raid", 00:31:02.795 "uuid": "0458836c-634a-42aa-a711-6c38669e33ae", 00:31:02.795 "strip_size_kb": 0, 00:31:02.795 "state": "online", 00:31:02.795 "raid_level": "raid1", 00:31:02.795 "superblock": true, 00:31:02.795 "num_base_bdevs": 2, 00:31:02.795 "num_base_bdevs_discovered": 1, 00:31:02.795 "num_base_bdevs_operational": 1, 00:31:02.795 "base_bdevs_list": [ 00:31:02.795 { 00:31:02.795 "name": null, 00:31:02.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:02.795 "is_configured": false, 00:31:02.795 "data_offset": 256, 00:31:02.795 
"data_size": 7936 00:31:02.795 }, 00:31:02.795 { 00:31:02.795 "name": "BaseBdev2", 00:31:02.795 "uuid": "5d8f6800-c5a6-4979-a9b0-bdc23a3e0097", 00:31:02.795 "is_configured": true, 00:31:02.795 "data_offset": 256, 00:31:02.795 "data_size": 7936 00:31:02.795 } 00:31:02.795 ] 00:31:02.795 }' 00:31:02.795 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:02.795 16:47:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:03.360 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:31:03.360 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:03.360 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:03.360 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:03.617 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:03.617 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:03.617 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:03.875 [2024-07-24 16:48:00.585993] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:03.875 [2024-07-24 16:48:00.586124] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:03.875 [2024-07-24 16:48:00.714970] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:03.875 [2024-07-24 16:48:00.715030] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:31:03.875 [2024-07-24 16:48:00.715112] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:31:03.875 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:03.875 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:04.133 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:04.133 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:31:04.133 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:31:04.133 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:31:04.133 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:31:04.133 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 1781801 00:31:04.133 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 1781801 ']' 00:31:04.133 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 1781801 00:31:04.133 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:31:04.133 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:04.133 16:48:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1781801 00:31:04.390 16:48:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:04.390 16:48:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:31:04.390 16:48:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1781801' 00:31:04.390 killing process with pid 1781801 00:31:04.390 16:48:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 1781801 00:31:04.390 [2024-07-24 16:48:01.024478] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:04.390 16:48:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 1781801 00:31:04.390 [2024-07-24 16:48:01.047347] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:06.289 16:48:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:31:06.289 00:31:06.289 real 0m11.974s 00:31:06.289 user 0m19.569s 00:31:06.289 sys 0m2.120s 00:31:06.289 16:48:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:06.289 16:48:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:06.289 ************************************ 00:31:06.289 END TEST raid_state_function_test_sb_4k 00:31:06.289 ************************************ 00:31:06.289 16:48:02 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:31:06.289 16:48:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:06.289 16:48:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:06.289 16:48:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:06.289 ************************************ 00:31:06.289 START TEST raid_superblock_test_4k 00:31:06.289 ************************************ 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:31:06.289 16:48:02 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@414 -- # local strip_size 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@427 -- # raid_pid=1784006 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@428 -- # waitforlisten 1784006 /var/tmp/spdk-raid.sock 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@831 -- # '[' -z 1784006 ']' 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:06.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:06.289 16:48:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:06.289 [2024-07-24 16:48:02.924255] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:31:06.289 [2024-07-24 16:48:02.924372] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1784006 ] 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:31:06.289 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.289 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:06.289 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.290 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:06.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.290 
EAL: Requested device 0000:3f:01.3 cannot be used 00:31:06.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.290 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:06.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.290 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:06.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.290 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:06.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.290 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:06.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.290 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:06.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.290 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:06.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.290 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:06.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.290 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:06.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.290 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:06.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.290 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:06.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.290 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:06.290 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:06.290 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:06.290 [2024-07-24 16:48:03.136900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.547 [2024-07-24 16:48:03.399948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.112 [2024-07-24 16:48:03.740495] 
bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:07.112 [2024-07-24 16:48:03.740537] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:07.112 16:48:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:07.112 16:48:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:31:07.112 16:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:31:07.112 16:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:31:07.112 16:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:31:07.112 16:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:31:07.112 16:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:07.112 16:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:07.112 16:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:31:07.112 16:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:07.112 16:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:31:07.370 malloc1 00:31:07.370 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:07.627 [2024-07-24 16:48:04.308417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:07.627 [2024-07-24 16:48:04.308478] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:07.627 [2024-07-24 16:48:04.308509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:31:07.627 [2024-07-24 16:48:04.308526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:07.627 [2024-07-24 16:48:04.311081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:07.627 [2024-07-24 16:48:04.311117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:07.627 pt1 00:31:07.627 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:31:07.627 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:31:07.627 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:31:07.627 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:31:07.627 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:07.627 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:07.627 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:31:07.627 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:07.627 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:31:07.885 malloc2 00:31:07.885 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:07.885 [2024-07-24 
16:48:04.705301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:07.885 [2024-07-24 16:48:04.705355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:07.885 [2024-07-24 16:48:04.705384] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:31:07.885 [2024-07-24 16:48:04.705401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:07.885 [2024-07-24 16:48:04.708135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:07.885 [2024-07-24 16:48:04.708185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:07.885 pt2 00:31:07.885 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:31:07.885 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:31:07.885 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:31:08.143 [2024-07-24 16:48:04.881797] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:08.143 [2024-07-24 16:48:04.884102] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:08.143 [2024-07-24 16:48:04.884336] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:31:08.143 [2024-07-24 16:48:04.884358] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:08.143 [2024-07-24 16:48:04.884693] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:31:08.143 [2024-07-24 16:48:04.884945] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:31:08.143 [2024-07-24 16:48:04.884964] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:31:08.143 [2024-07-24 16:48:04.885158] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:08.143 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:08.143 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:08.143 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:08.143 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:08.143 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:08.143 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:08.143 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:08.143 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:08.143 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:08.143 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:08.143 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:08.143 16:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:08.401 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:08.401 "name": "raid_bdev1", 00:31:08.401 "uuid": "77f421a1-6584-4a68-af61-a1b9b57305f7", 00:31:08.401 "strip_size_kb": 0, 00:31:08.401 "state": "online", 00:31:08.401 "raid_level": "raid1", 00:31:08.401 "superblock": true, 00:31:08.401 
"num_base_bdevs": 2, 00:31:08.401 "num_base_bdevs_discovered": 2, 00:31:08.401 "num_base_bdevs_operational": 2, 00:31:08.401 "base_bdevs_list": [ 00:31:08.401 { 00:31:08.401 "name": "pt1", 00:31:08.401 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:08.401 "is_configured": true, 00:31:08.401 "data_offset": 256, 00:31:08.401 "data_size": 7936 00:31:08.401 }, 00:31:08.401 { 00:31:08.401 "name": "pt2", 00:31:08.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:08.401 "is_configured": true, 00:31:08.401 "data_offset": 256, 00:31:08.401 "data_size": 7936 00:31:08.401 } 00:31:08.401 ] 00:31:08.401 }' 00:31:08.401 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:08.401 16:48:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:08.966 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:31:08.966 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:08.966 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:08.966 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:08.966 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:08.966 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:31:08.966 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:08.966 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:09.224 [2024-07-24 16:48:05.868732] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:09.224 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # 
raid_bdev_info='{ 00:31:09.224 "name": "raid_bdev1", 00:31:09.224 "aliases": [ 00:31:09.224 "77f421a1-6584-4a68-af61-a1b9b57305f7" 00:31:09.224 ], 00:31:09.224 "product_name": "Raid Volume", 00:31:09.224 "block_size": 4096, 00:31:09.224 "num_blocks": 7936, 00:31:09.224 "uuid": "77f421a1-6584-4a68-af61-a1b9b57305f7", 00:31:09.224 "assigned_rate_limits": { 00:31:09.224 "rw_ios_per_sec": 0, 00:31:09.224 "rw_mbytes_per_sec": 0, 00:31:09.224 "r_mbytes_per_sec": 0, 00:31:09.224 "w_mbytes_per_sec": 0 00:31:09.224 }, 00:31:09.224 "claimed": false, 00:31:09.224 "zoned": false, 00:31:09.224 "supported_io_types": { 00:31:09.224 "read": true, 00:31:09.224 "write": true, 00:31:09.224 "unmap": false, 00:31:09.224 "flush": false, 00:31:09.224 "reset": true, 00:31:09.224 "nvme_admin": false, 00:31:09.224 "nvme_io": false, 00:31:09.224 "nvme_io_md": false, 00:31:09.224 "write_zeroes": true, 00:31:09.224 "zcopy": false, 00:31:09.224 "get_zone_info": false, 00:31:09.224 "zone_management": false, 00:31:09.224 "zone_append": false, 00:31:09.224 "compare": false, 00:31:09.224 "compare_and_write": false, 00:31:09.224 "abort": false, 00:31:09.224 "seek_hole": false, 00:31:09.224 "seek_data": false, 00:31:09.224 "copy": false, 00:31:09.224 "nvme_iov_md": false 00:31:09.224 }, 00:31:09.224 "memory_domains": [ 00:31:09.224 { 00:31:09.224 "dma_device_id": "system", 00:31:09.224 "dma_device_type": 1 00:31:09.224 }, 00:31:09.224 { 00:31:09.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:09.224 "dma_device_type": 2 00:31:09.224 }, 00:31:09.224 { 00:31:09.224 "dma_device_id": "system", 00:31:09.224 "dma_device_type": 1 00:31:09.224 }, 00:31:09.224 { 00:31:09.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:09.224 "dma_device_type": 2 00:31:09.224 } 00:31:09.224 ], 00:31:09.224 "driver_specific": { 00:31:09.224 "raid": { 00:31:09.224 "uuid": "77f421a1-6584-4a68-af61-a1b9b57305f7", 00:31:09.224 "strip_size_kb": 0, 00:31:09.224 "state": "online", 00:31:09.224 "raid_level": "raid1", 
00:31:09.224 "superblock": true, 00:31:09.224 "num_base_bdevs": 2, 00:31:09.224 "num_base_bdevs_discovered": 2, 00:31:09.224 "num_base_bdevs_operational": 2, 00:31:09.224 "base_bdevs_list": [ 00:31:09.224 { 00:31:09.224 "name": "pt1", 00:31:09.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:09.224 "is_configured": true, 00:31:09.225 "data_offset": 256, 00:31:09.225 "data_size": 7936 00:31:09.225 }, 00:31:09.225 { 00:31:09.225 "name": "pt2", 00:31:09.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:09.225 "is_configured": true, 00:31:09.225 "data_offset": 256, 00:31:09.225 "data_size": 7936 00:31:09.225 } 00:31:09.225 ] 00:31:09.225 } 00:31:09.225 } 00:31:09.225 }' 00:31:09.225 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:09.225 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:09.225 pt2' 00:31:09.225 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:09.225 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:09.225 16:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:09.483 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:09.483 "name": "pt1", 00:31:09.483 "aliases": [ 00:31:09.483 "00000000-0000-0000-0000-000000000001" 00:31:09.483 ], 00:31:09.483 "product_name": "passthru", 00:31:09.483 "block_size": 4096, 00:31:09.483 "num_blocks": 8192, 00:31:09.483 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:09.483 "assigned_rate_limits": { 00:31:09.483 "rw_ios_per_sec": 0, 00:31:09.483 "rw_mbytes_per_sec": 0, 00:31:09.483 "r_mbytes_per_sec": 0, 00:31:09.483 "w_mbytes_per_sec": 0 00:31:09.483 }, 
00:31:09.483 "claimed": true, 00:31:09.483 "claim_type": "exclusive_write", 00:31:09.483 "zoned": false, 00:31:09.484 "supported_io_types": { 00:31:09.484 "read": true, 00:31:09.484 "write": true, 00:31:09.484 "unmap": true, 00:31:09.484 "flush": true, 00:31:09.484 "reset": true, 00:31:09.484 "nvme_admin": false, 00:31:09.484 "nvme_io": false, 00:31:09.484 "nvme_io_md": false, 00:31:09.484 "write_zeroes": true, 00:31:09.484 "zcopy": true, 00:31:09.484 "get_zone_info": false, 00:31:09.484 "zone_management": false, 00:31:09.484 "zone_append": false, 00:31:09.484 "compare": false, 00:31:09.484 "compare_and_write": false, 00:31:09.484 "abort": true, 00:31:09.484 "seek_hole": false, 00:31:09.484 "seek_data": false, 00:31:09.484 "copy": true, 00:31:09.484 "nvme_iov_md": false 00:31:09.484 }, 00:31:09.484 "memory_domains": [ 00:31:09.484 { 00:31:09.484 "dma_device_id": "system", 00:31:09.484 "dma_device_type": 1 00:31:09.484 }, 00:31:09.484 { 00:31:09.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:09.484 "dma_device_type": 2 00:31:09.484 } 00:31:09.484 ], 00:31:09.484 "driver_specific": { 00:31:09.484 "passthru": { 00:31:09.484 "name": "pt1", 00:31:09.484 "base_bdev_name": "malloc1" 00:31:09.484 } 00:31:09.484 } 00:31:09.484 }' 00:31:09.484 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:09.484 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:09.484 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:09.484 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:09.484 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:09.484 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:09.484 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:09.484 16:48:06 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:09.743 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:09.743 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:09.743 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:09.743 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:09.743 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:09.743 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:09.743 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:10.002 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:10.002 "name": "pt2", 00:31:10.002 "aliases": [ 00:31:10.002 "00000000-0000-0000-0000-000000000002" 00:31:10.002 ], 00:31:10.002 "product_name": "passthru", 00:31:10.002 "block_size": 4096, 00:31:10.002 "num_blocks": 8192, 00:31:10.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:10.002 "assigned_rate_limits": { 00:31:10.002 "rw_ios_per_sec": 0, 00:31:10.002 "rw_mbytes_per_sec": 0, 00:31:10.002 "r_mbytes_per_sec": 0, 00:31:10.002 "w_mbytes_per_sec": 0 00:31:10.002 }, 00:31:10.002 "claimed": true, 00:31:10.002 "claim_type": "exclusive_write", 00:31:10.002 "zoned": false, 00:31:10.002 "supported_io_types": { 00:31:10.002 "read": true, 00:31:10.002 "write": true, 00:31:10.002 "unmap": true, 00:31:10.002 "flush": true, 00:31:10.002 "reset": true, 00:31:10.002 "nvme_admin": false, 00:31:10.002 "nvme_io": false, 00:31:10.002 "nvme_io_md": false, 00:31:10.002 "write_zeroes": true, 00:31:10.003 "zcopy": true, 00:31:10.003 "get_zone_info": false, 00:31:10.003 
"zone_management": false, 00:31:10.003 "zone_append": false, 00:31:10.003 "compare": false, 00:31:10.003 "compare_and_write": false, 00:31:10.003 "abort": true, 00:31:10.003 "seek_hole": false, 00:31:10.003 "seek_data": false, 00:31:10.003 "copy": true, 00:31:10.003 "nvme_iov_md": false 00:31:10.003 }, 00:31:10.003 "memory_domains": [ 00:31:10.003 { 00:31:10.003 "dma_device_id": "system", 00:31:10.003 "dma_device_type": 1 00:31:10.003 }, 00:31:10.003 { 00:31:10.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:10.003 "dma_device_type": 2 00:31:10.003 } 00:31:10.003 ], 00:31:10.003 "driver_specific": { 00:31:10.003 "passthru": { 00:31:10.003 "name": "pt2", 00:31:10.003 "base_bdev_name": "malloc2" 00:31:10.003 } 00:31:10.003 } 00:31:10.003 }' 00:31:10.003 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:10.003 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:10.003 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:10.003 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:10.003 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:10.003 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:10.003 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:10.261 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:10.261 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:10.262 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:10.262 16:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:10.262 16:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:10.262 16:48:07 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:31:10.262 16:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:10.521 [2024-07-24 16:48:07.176295] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:10.521 16:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=77f421a1-6584-4a68-af61-a1b9b57305f7 00:31:10.521 16:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' -z 77f421a1-6584-4a68-af61-a1b9b57305f7 ']' 00:31:10.521 16:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:10.521 [2024-07-24 16:48:07.352453] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:10.521 [2024-07-24 16:48:07.352483] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:10.521 [2024-07-24 16:48:07.352567] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:10.521 [2024-07-24 16:48:07.352640] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:10.521 [2024-07-24 16:48:07.352666] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:31:10.780 16:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:10.780 16:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:31:10.780 16:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:31:10.780 16:48:07 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:31:10.780 16:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:31:10.780 16:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:11.040 16:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:31:11.040 16:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:11.300 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:31:11.300 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:11.559 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:31:11.559 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:11.559 16:48:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:31:11.559 16:48:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:11.559 16:48:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:11.559 16:48:08 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:11.559 16:48:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:11.559 16:48:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:11.559 16:48:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:11.559 16:48:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:11.559 16:48:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:11.559 16:48:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:31:11.559 16:48:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:11.818 [2024-07-24 16:48:08.499608] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:11.818 [2024-07-24 16:48:08.501929] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:11.818 [2024-07-24 16:48:08.502006] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:11.818 [2024-07-24 16:48:08.502069] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:11.818 [2024-07-24 16:48:08.502093] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:11.818 [2024-07-24 16:48:08.502109] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:31:11.818 request: 00:31:11.818 { 00:31:11.818 "name": "raid_bdev1", 00:31:11.818 "raid_level": "raid1", 00:31:11.818 "base_bdevs": [ 00:31:11.818 "malloc1", 00:31:11.818 "malloc2" 00:31:11.818 ], 00:31:11.818 "superblock": false, 00:31:11.818 "method": "bdev_raid_create", 00:31:11.818 "req_id": 1 00:31:11.818 } 00:31:11.818 Got JSON-RPC error response 00:31:11.818 response: 00:31:11.818 { 00:31:11.818 "code": -17, 00:31:11.818 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:11.818 } 00:31:11.818 16:48:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:31:11.818 16:48:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:11.818 16:48:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:11.818 16:48:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:11.818 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:11.818 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:31:12.077 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:31:12.077 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:31:12.077 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:12.336 [2024-07-24 16:48:08.948960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:12.336 [2024-07-24 16:48:08.949025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:31:12.336 [2024-07-24 16:48:08.949050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:31:12.336 [2024-07-24 16:48:08.949068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:12.337 [2024-07-24 16:48:08.951848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:12.337 [2024-07-24 16:48:08.951887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:12.337 [2024-07-24 16:48:08.951981] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:12.337 [2024-07-24 16:48:08.952082] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:12.337 pt1 00:31:12.337 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:31:12.337 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:12.337 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:12.337 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:12.337 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:12.337 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:12.337 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:12.337 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:12.337 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:12.337 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:12.337 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:12.337 16:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:12.595 16:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:12.595 "name": "raid_bdev1", 00:31:12.595 "uuid": "77f421a1-6584-4a68-af61-a1b9b57305f7", 00:31:12.595 "strip_size_kb": 0, 00:31:12.595 "state": "configuring", 00:31:12.595 "raid_level": "raid1", 00:31:12.595 "superblock": true, 00:31:12.595 "num_base_bdevs": 2, 00:31:12.595 "num_base_bdevs_discovered": 1, 00:31:12.595 "num_base_bdevs_operational": 2, 00:31:12.595 "base_bdevs_list": [ 00:31:12.595 { 00:31:12.595 "name": "pt1", 00:31:12.595 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:12.595 "is_configured": true, 00:31:12.595 "data_offset": 256, 00:31:12.595 "data_size": 7936 00:31:12.595 }, 00:31:12.595 { 00:31:12.595 "name": null, 00:31:12.595 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:12.595 "is_configured": false, 00:31:12.595 "data_offset": 256, 00:31:12.595 "data_size": 7936 00:31:12.595 } 00:31:12.595 ] 00:31:12.595 }' 00:31:12.595 16:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:12.595 16:48:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:13.163 16:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:31:13.163 16:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:31:13.163 16:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:31:13.163 16:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:13.163 
[2024-07-24 16:48:09.991797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:13.163 [2024-07-24 16:48:09.991867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:13.163 [2024-07-24 16:48:09.991893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:31:13.163 [2024-07-24 16:48:09.991912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:13.163 [2024-07-24 16:48:09.992506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:13.163 [2024-07-24 16:48:09.992537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:13.163 [2024-07-24 16:48:09.992633] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:13.163 [2024-07-24 16:48:09.992670] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:13.163 [2024-07-24 16:48:09.992849] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:31:13.163 [2024-07-24 16:48:09.992867] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:13.163 [2024-07-24 16:48:09.993172] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:31:13.163 [2024-07-24 16:48:09.993395] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:31:13.163 [2024-07-24 16:48:09.993409] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:31:13.163 [2024-07-24 16:48:09.993585] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:13.163 pt2 00:31:13.163 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:31:13.163 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:31:13.163 16:48:10 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:13.163 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:13.163 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:13.163 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:13.163 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:13.163 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:13.163 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:13.163 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:13.163 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:13.163 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:13.163 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:13.163 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:13.423 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:13.423 "name": "raid_bdev1", 00:31:13.423 "uuid": "77f421a1-6584-4a68-af61-a1b9b57305f7", 00:31:13.423 "strip_size_kb": 0, 00:31:13.423 "state": "online", 00:31:13.423 "raid_level": "raid1", 00:31:13.423 "superblock": true, 00:31:13.423 "num_base_bdevs": 2, 00:31:13.423 "num_base_bdevs_discovered": 2, 00:31:13.423 "num_base_bdevs_operational": 2, 00:31:13.423 "base_bdevs_list": [ 00:31:13.423 { 00:31:13.423 "name": "pt1", 00:31:13.423 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:31:13.423 "is_configured": true, 00:31:13.423 "data_offset": 256, 00:31:13.423 "data_size": 7936 00:31:13.423 }, 00:31:13.423 { 00:31:13.423 "name": "pt2", 00:31:13.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:13.423 "is_configured": true, 00:31:13.423 "data_offset": 256, 00:31:13.423 "data_size": 7936 00:31:13.423 } 00:31:13.423 ] 00:31:13.423 }' 00:31:13.423 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:13.423 16:48:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:13.991 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:31:13.991 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:13.991 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:13.991 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:13.991 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:13.991 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:31:13.991 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:13.991 16:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:14.250 [2024-07-24 16:48:11.030922] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:14.250 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:14.250 "name": "raid_bdev1", 00:31:14.250 "aliases": [ 00:31:14.250 "77f421a1-6584-4a68-af61-a1b9b57305f7" 00:31:14.250 ], 00:31:14.250 "product_name": "Raid Volume", 00:31:14.250 "block_size": 4096, 
00:31:14.250 "num_blocks": 7936, 00:31:14.250 "uuid": "77f421a1-6584-4a68-af61-a1b9b57305f7", 00:31:14.250 "assigned_rate_limits": { 00:31:14.250 "rw_ios_per_sec": 0, 00:31:14.250 "rw_mbytes_per_sec": 0, 00:31:14.250 "r_mbytes_per_sec": 0, 00:31:14.250 "w_mbytes_per_sec": 0 00:31:14.250 }, 00:31:14.250 "claimed": false, 00:31:14.250 "zoned": false, 00:31:14.250 "supported_io_types": { 00:31:14.250 "read": true, 00:31:14.250 "write": true, 00:31:14.250 "unmap": false, 00:31:14.250 "flush": false, 00:31:14.250 "reset": true, 00:31:14.250 "nvme_admin": false, 00:31:14.250 "nvme_io": false, 00:31:14.250 "nvme_io_md": false, 00:31:14.250 "write_zeroes": true, 00:31:14.250 "zcopy": false, 00:31:14.250 "get_zone_info": false, 00:31:14.250 "zone_management": false, 00:31:14.250 "zone_append": false, 00:31:14.250 "compare": false, 00:31:14.250 "compare_and_write": false, 00:31:14.250 "abort": false, 00:31:14.250 "seek_hole": false, 00:31:14.250 "seek_data": false, 00:31:14.250 "copy": false, 00:31:14.250 "nvme_iov_md": false 00:31:14.250 }, 00:31:14.250 "memory_domains": [ 00:31:14.250 { 00:31:14.250 "dma_device_id": "system", 00:31:14.250 "dma_device_type": 1 00:31:14.250 }, 00:31:14.250 { 00:31:14.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:14.250 "dma_device_type": 2 00:31:14.250 }, 00:31:14.250 { 00:31:14.250 "dma_device_id": "system", 00:31:14.250 "dma_device_type": 1 00:31:14.250 }, 00:31:14.250 { 00:31:14.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:14.250 "dma_device_type": 2 00:31:14.250 } 00:31:14.250 ], 00:31:14.250 "driver_specific": { 00:31:14.250 "raid": { 00:31:14.250 "uuid": "77f421a1-6584-4a68-af61-a1b9b57305f7", 00:31:14.250 "strip_size_kb": 0, 00:31:14.250 "state": "online", 00:31:14.250 "raid_level": "raid1", 00:31:14.250 "superblock": true, 00:31:14.250 "num_base_bdevs": 2, 00:31:14.250 "num_base_bdevs_discovered": 2, 00:31:14.250 "num_base_bdevs_operational": 2, 00:31:14.250 "base_bdevs_list": [ 00:31:14.250 { 00:31:14.250 "name": "pt1", 
00:31:14.250 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:14.250 "is_configured": true, 00:31:14.250 "data_offset": 256, 00:31:14.250 "data_size": 7936 00:31:14.250 }, 00:31:14.250 { 00:31:14.250 "name": "pt2", 00:31:14.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:14.250 "is_configured": true, 00:31:14.250 "data_offset": 256, 00:31:14.250 "data_size": 7936 00:31:14.250 } 00:31:14.250 ] 00:31:14.250 } 00:31:14.250 } 00:31:14.250 }' 00:31:14.250 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:14.250 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:14.250 pt2' 00:31:14.250 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:14.250 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:14.250 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:14.509 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:14.509 "name": "pt1", 00:31:14.509 "aliases": [ 00:31:14.509 "00000000-0000-0000-0000-000000000001" 00:31:14.509 ], 00:31:14.509 "product_name": "passthru", 00:31:14.509 "block_size": 4096, 00:31:14.509 "num_blocks": 8192, 00:31:14.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:14.509 "assigned_rate_limits": { 00:31:14.509 "rw_ios_per_sec": 0, 00:31:14.509 "rw_mbytes_per_sec": 0, 00:31:14.509 "r_mbytes_per_sec": 0, 00:31:14.509 "w_mbytes_per_sec": 0 00:31:14.509 }, 00:31:14.509 "claimed": true, 00:31:14.509 "claim_type": "exclusive_write", 00:31:14.509 "zoned": false, 00:31:14.509 "supported_io_types": { 00:31:14.509 "read": true, 00:31:14.509 "write": true, 00:31:14.509 "unmap": true, 00:31:14.509 "flush": 
true, 00:31:14.509 "reset": true, 00:31:14.509 "nvme_admin": false, 00:31:14.509 "nvme_io": false, 00:31:14.509 "nvme_io_md": false, 00:31:14.509 "write_zeroes": true, 00:31:14.509 "zcopy": true, 00:31:14.509 "get_zone_info": false, 00:31:14.509 "zone_management": false, 00:31:14.509 "zone_append": false, 00:31:14.509 "compare": false, 00:31:14.509 "compare_and_write": false, 00:31:14.509 "abort": true, 00:31:14.509 "seek_hole": false, 00:31:14.509 "seek_data": false, 00:31:14.509 "copy": true, 00:31:14.509 "nvme_iov_md": false 00:31:14.509 }, 00:31:14.509 "memory_domains": [ 00:31:14.509 { 00:31:14.509 "dma_device_id": "system", 00:31:14.509 "dma_device_type": 1 00:31:14.509 }, 00:31:14.509 { 00:31:14.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:14.509 "dma_device_type": 2 00:31:14.509 } 00:31:14.509 ], 00:31:14.509 "driver_specific": { 00:31:14.509 "passthru": { 00:31:14.509 "name": "pt1", 00:31:14.509 "base_bdev_name": "malloc1" 00:31:14.509 } 00:31:14.509 } 00:31:14.509 }' 00:31:14.509 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:14.768 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:14.768 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:14.769 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:14.769 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:14.769 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:14.769 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:14.769 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:14.769 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:14.769 16:48:11 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:15.027 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:15.027 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:15.027 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:15.027 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:15.027 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:15.285 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:15.285 "name": "pt2", 00:31:15.285 "aliases": [ 00:31:15.285 "00000000-0000-0000-0000-000000000002" 00:31:15.285 ], 00:31:15.285 "product_name": "passthru", 00:31:15.285 "block_size": 4096, 00:31:15.285 "num_blocks": 8192, 00:31:15.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:15.285 "assigned_rate_limits": { 00:31:15.285 "rw_ios_per_sec": 0, 00:31:15.285 "rw_mbytes_per_sec": 0, 00:31:15.285 "r_mbytes_per_sec": 0, 00:31:15.285 "w_mbytes_per_sec": 0 00:31:15.285 }, 00:31:15.285 "claimed": true, 00:31:15.285 "claim_type": "exclusive_write", 00:31:15.285 "zoned": false, 00:31:15.285 "supported_io_types": { 00:31:15.285 "read": true, 00:31:15.285 "write": true, 00:31:15.285 "unmap": true, 00:31:15.285 "flush": true, 00:31:15.285 "reset": true, 00:31:15.285 "nvme_admin": false, 00:31:15.285 "nvme_io": false, 00:31:15.285 "nvme_io_md": false, 00:31:15.285 "write_zeroes": true, 00:31:15.285 "zcopy": true, 00:31:15.285 "get_zone_info": false, 00:31:15.285 "zone_management": false, 00:31:15.285 "zone_append": false, 00:31:15.285 "compare": false, 00:31:15.285 "compare_and_write": false, 00:31:15.285 "abort": true, 00:31:15.285 "seek_hole": false, 00:31:15.285 "seek_data": false, 00:31:15.285 "copy": true, 
00:31:15.285 "nvme_iov_md": false 00:31:15.285 }, 00:31:15.285 "memory_domains": [ 00:31:15.285 { 00:31:15.285 "dma_device_id": "system", 00:31:15.285 "dma_device_type": 1 00:31:15.285 }, 00:31:15.285 { 00:31:15.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.286 "dma_device_type": 2 00:31:15.286 } 00:31:15.286 ], 00:31:15.286 "driver_specific": { 00:31:15.286 "passthru": { 00:31:15.286 "name": "pt2", 00:31:15.286 "base_bdev_name": "malloc2" 00:31:15.286 } 00:31:15.286 } 00:31:15.286 }' 00:31:15.286 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:15.286 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:15.286 16:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:15.286 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:15.286 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:15.286 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:15.286 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:15.286 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:15.544 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:15.544 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:15.544 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:15.544 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:15.544 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:15.544 16:48:12 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:31:15.802 [2024-07-24 16:48:12.438842] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:15.802 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # '[' 77f421a1-6584-4a68-af61-a1b9b57305f7 '!=' 77f421a1-6584-4a68-af61-a1b9b57305f7 ']' 00:31:15.802 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:31:15.802 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:15.802 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:31:15.802 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:15.802 [2024-07-24 16:48:12.663151] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:16.076 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:16.076 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:16.076 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:16.076 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:16.076 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:16.076 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:16.076 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:16.076 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:16.076 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:16.076 16:48:12 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:16.076 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.076 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.076 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:16.076 "name": "raid_bdev1", 00:31:16.076 "uuid": "77f421a1-6584-4a68-af61-a1b9b57305f7", 00:31:16.076 "strip_size_kb": 0, 00:31:16.076 "state": "online", 00:31:16.076 "raid_level": "raid1", 00:31:16.076 "superblock": true, 00:31:16.076 "num_base_bdevs": 2, 00:31:16.076 "num_base_bdevs_discovered": 1, 00:31:16.076 "num_base_bdevs_operational": 1, 00:31:16.076 "base_bdevs_list": [ 00:31:16.076 { 00:31:16.076 "name": null, 00:31:16.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.076 "is_configured": false, 00:31:16.076 "data_offset": 256, 00:31:16.076 "data_size": 7936 00:31:16.076 }, 00:31:16.076 { 00:31:16.076 "name": "pt2", 00:31:16.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:16.076 "is_configured": true, 00:31:16.076 "data_offset": 256, 00:31:16.077 "data_size": 7936 00:31:16.077 } 00:31:16.077 ] 00:31:16.077 }' 00:31:16.077 16:48:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:16.077 16:48:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:16.678 16:48:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:16.938 [2024-07-24 16:48:13.689917] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:16.938 [2024-07-24 16:48:13.689950] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:31:16.938 [2024-07-24 16:48:13.690034] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:16.938 [2024-07-24 16:48:13.690092] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:16.938 [2024-07-24 16:48:13.690111] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:31:16.938 16:48:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.938 16:48:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:31:17.197 16:48:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:31:17.197 16:48:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:31:17.197 16:48:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:31:17.197 16:48:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:31:17.197 16:48:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:17.457 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:31:17.457 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:31:17.457 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:31:17.457 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:31:17.457 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@534 -- # i=1 00:31:17.457 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@535 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:17.717 [2024-07-24 16:48:14.371729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:17.717 [2024-07-24 16:48:14.371807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:17.717 [2024-07-24 16:48:14.371834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:31:17.717 [2024-07-24 16:48:14.371852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:17.717 [2024-07-24 16:48:14.374637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:17.717 [2024-07-24 16:48:14.374677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:17.717 [2024-07-24 16:48:14.374767] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:17.717 [2024-07-24 16:48:14.374831] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:17.717 [2024-07-24 16:48:14.375004] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:31:17.717 [2024-07-24 16:48:14.375022] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:17.717 [2024-07-24 16:48:14.375332] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:31:17.717 [2024-07-24 16:48:14.375558] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:31:17.717 [2024-07-24 16:48:14.375572] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:31:17.717 [2024-07-24 16:48:14.375784] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:17.717 pt2 00:31:17.717 16:48:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:17.717 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:17.717 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:17.717 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:17.717 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:17.717 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:17.717 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:17.717 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:17.717 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:17.717 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:17.717 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.717 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:17.977 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:17.977 "name": "raid_bdev1", 00:31:17.977 "uuid": "77f421a1-6584-4a68-af61-a1b9b57305f7", 00:31:17.977 "strip_size_kb": 0, 00:31:17.977 "state": "online", 00:31:17.977 "raid_level": "raid1", 00:31:17.977 "superblock": true, 00:31:17.977 "num_base_bdevs": 2, 00:31:17.977 "num_base_bdevs_discovered": 1, 00:31:17.977 "num_base_bdevs_operational": 1, 00:31:17.977 "base_bdevs_list": [ 00:31:17.977 { 00:31:17.977 "name": null, 00:31:17.977 "uuid": "00000000-0000-0000-0000-000000000000", 
00:31:17.977 "is_configured": false, 00:31:17.977 "data_offset": 256, 00:31:17.977 "data_size": 7936 00:31:17.977 }, 00:31:17.977 { 00:31:17.977 "name": "pt2", 00:31:17.977 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:17.977 "is_configured": true, 00:31:17.977 "data_offset": 256, 00:31:17.977 "data_size": 7936 00:31:17.977 } 00:31:17.977 ] 00:31:17.977 }' 00:31:17.977 16:48:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:17.977 16:48:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:18.546 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:18.805 [2024-07-24 16:48:15.410519] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:18.805 [2024-07-24 16:48:15.410554] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:18.805 [2024-07-24 16:48:15.410634] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:18.805 [2024-07-24 16:48:15.410698] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:18.805 [2024-07-24 16:48:15.410714] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:31:18.805 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.805 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:31:18.805 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:31:18.805 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:31:18.805 16:48:15 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:31:18.805 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:19.064 [2024-07-24 16:48:15.863718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:19.064 [2024-07-24 16:48:15.863782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:19.064 [2024-07-24 16:48:15.863809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:31:19.064 [2024-07-24 16:48:15.863825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:19.064 [2024-07-24 16:48:15.866615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:19.064 [2024-07-24 16:48:15.866651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:19.064 [2024-07-24 16:48:15.866747] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:19.064 [2024-07-24 16:48:15.866839] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:19.064 [2024-07-24 16:48:15.867043] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:19.064 [2024-07-24 16:48:15.867061] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:19.064 [2024-07-24 16:48:15.867088] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042f80 name raid_bdev1, state configuring 00:31:19.064 [2024-07-24 16:48:15.867185] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:19.064 [2024-07-24 16:48:15.867282] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:31:19.064 [2024-07-24 
16:48:15.867296] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:19.064 [2024-07-24 16:48:15.867604] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:31:19.064 [2024-07-24 16:48:15.867827] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:31:19.064 [2024-07-24 16:48:15.867844] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:31:19.064 [2024-07-24 16:48:15.868070] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:19.064 pt1 00:31:19.064 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:31:19.064 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:19.064 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:19.064 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:19.064 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:19.064 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:19.064 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:19.064 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:19.064 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:19.064 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:19.064 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:19.064 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.064 16:48:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.324 16:48:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:19.324 "name": "raid_bdev1", 00:31:19.324 "uuid": "77f421a1-6584-4a68-af61-a1b9b57305f7", 00:31:19.324 "strip_size_kb": 0, 00:31:19.324 "state": "online", 00:31:19.324 "raid_level": "raid1", 00:31:19.324 "superblock": true, 00:31:19.324 "num_base_bdevs": 2, 00:31:19.324 "num_base_bdevs_discovered": 1, 00:31:19.324 "num_base_bdevs_operational": 1, 00:31:19.324 "base_bdevs_list": [ 00:31:19.324 { 00:31:19.324 "name": null, 00:31:19.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:19.324 "is_configured": false, 00:31:19.324 "data_offset": 256, 00:31:19.324 "data_size": 7936 00:31:19.324 }, 00:31:19.324 { 00:31:19.324 "name": "pt2", 00:31:19.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:19.324 "is_configured": true, 00:31:19.324 "data_offset": 256, 00:31:19.324 "data_size": 7936 00:31:19.324 } 00:31:19.324 ] 00:31:19.324 }' 00:31:19.324 16:48:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:19.324 16:48:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:19.892 16:48:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:31:19.892 16:48:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:20.152 16:48:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:31:20.152 16:48:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:20.152 16:48:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:31:20.412 [2024-07-24 16:48:17.143737] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:20.412 16:48:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # '[' 77f421a1-6584-4a68-af61-a1b9b57305f7 '!=' 77f421a1-6584-4a68-af61-a1b9b57305f7 ']' 00:31:20.412 16:48:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@578 -- # killprocess 1784006 00:31:20.412 16:48:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 1784006 ']' 00:31:20.412 16:48:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 1784006 00:31:20.412 16:48:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:31:20.412 16:48:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:20.412 16:48:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1784006 00:31:20.412 16:48:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:20.412 16:48:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:20.412 16:48:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1784006' 00:31:20.412 killing process with pid 1784006 00:31:20.412 16:48:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 1784006 00:31:20.412 [2024-07-24 16:48:17.231559] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:20.412 [2024-07-24 16:48:17.231663] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:20.412 [2024-07-24 16:48:17.231728] bdev_raid.c: 
463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:20.412 16:48:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 1784006 00:31:20.412 [2024-07-24 16:48:17.231747] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:31:20.671 [2024-07-24 16:48:17.422667] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:22.577 16:48:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@580 -- # return 0 00:31:22.577 00:31:22.577 real 0m16.275s 00:31:22.577 user 0m27.800s 00:31:22.577 sys 0m2.851s 00:31:22.577 16:48:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:22.577 16:48:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:22.577 ************************************ 00:31:22.577 END TEST raid_superblock_test_4k 00:31:22.577 ************************************ 00:31:22.578 16:48:19 bdev_raid -- bdev/bdev_raid.sh@980 -- # '[' true = true ']' 00:31:22.578 16:48:19 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:31:22.578 16:48:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:31:22.578 16:48:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:22.578 16:48:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:22.578 ************************************ 00:31:22.578 START TEST raid_rebuild_test_sb_4k 00:31:22.578 ************************************ 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:31:22.578 
16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # local verify=true 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # local strip_size 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # local create_arg 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@594 -- # local data_offset 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # '[' raid1 
'!=' raid1 ']' 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # raid_pid=1787398 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # waitforlisten 1787398 /var/tmp/spdk-raid.sock 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 1787398 ']' 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:22.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:22.578 16:48:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:22.578 [2024-07-24 16:48:19.278278] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:31:22.578 [2024-07-24 16:48:19.278404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1787398 ] 00:31:22.578 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:22.578 Zero copy mechanism will not be used. 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:22.578 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:22.578 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:22.578 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:22.578 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:22.837 [2024-07-24 16:48:19.504251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:23.095 [2024-07-24 16:48:19.781073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.354 [2024-07-24 16:48:20.105464] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:23.354 [2024-07-24 16:48:20.105505] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:23.613 16:48:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:23.613 16:48:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:31:23.613 16:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:31:23.613 16:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:31:23.871 BaseBdev1_malloc 
00:31:23.872 16:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:24.130 [2024-07-24 16:48:20.787252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:24.130 [2024-07-24 16:48:20.787317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:24.130 [2024-07-24 16:48:20.787346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:31:24.130 [2024-07-24 16:48:20.787365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:24.130 [2024-07-24 16:48:20.790116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:24.130 [2024-07-24 16:48:20.790163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:24.130 BaseBdev1 00:31:24.130 16:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:31:24.130 16:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:31:24.390 BaseBdev2_malloc 00:31:24.390 16:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:24.649 [2024-07-24 16:48:21.296504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:24.649 [2024-07-24 16:48:21.296570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:24.649 [2024-07-24 16:48:21.296598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:31:24.649 [2024-07-24 
16:48:21.296619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:24.649 [2024-07-24 16:48:21.299430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:24.649 [2024-07-24 16:48:21.299469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:24.649 BaseBdev2 00:31:24.649 16:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:31:24.909 spare_malloc 00:31:24.909 16:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:25.168 spare_delay 00:31:25.168 16:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:25.168 [2024-07-24 16:48:22.012016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:25.168 [2024-07-24 16:48:22.012074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:25.168 [2024-07-24 16:48:22.012101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:31:25.168 [2024-07-24 16:48:22.012120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:25.168 [2024-07-24 16:48:22.014893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:25.168 [2024-07-24 16:48:22.014930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:25.168 spare 00:31:25.428 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:31:25.428 [2024-07-24 16:48:22.240654] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:25.428 [2024-07-24 16:48:22.242964] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:25.428 [2024-07-24 16:48:22.243208] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:31:25.428 [2024-07-24 16:48:22.243234] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:25.428 [2024-07-24 16:48:22.243584] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:31:25.428 [2024-07-24 16:48:22.243828] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:31:25.428 [2024-07-24 16:48:22.243843] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:31:25.428 [2024-07-24 16:48:22.244039] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:25.428 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:25.428 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:25.428 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:25.428 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:25.428 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:25.428 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:25.428 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:31:25.428 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:25.428 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:25.428 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:25.428 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.428 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.688 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:25.688 "name": "raid_bdev1", 00:31:25.688 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:25.688 "strip_size_kb": 0, 00:31:25.688 "state": "online", 00:31:25.688 "raid_level": "raid1", 00:31:25.688 "superblock": true, 00:31:25.688 "num_base_bdevs": 2, 00:31:25.688 "num_base_bdevs_discovered": 2, 00:31:25.688 "num_base_bdevs_operational": 2, 00:31:25.688 "base_bdevs_list": [ 00:31:25.688 { 00:31:25.688 "name": "BaseBdev1", 00:31:25.688 "uuid": "620269d9-8954-5547-b8e3-a39670815e0e", 00:31:25.688 "is_configured": true, 00:31:25.688 "data_offset": 256, 00:31:25.688 "data_size": 7936 00:31:25.688 }, 00:31:25.688 { 00:31:25.688 "name": "BaseBdev2", 00:31:25.688 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:25.688 "is_configured": true, 00:31:25.688 "data_offset": 256, 00:31:25.688 "data_size": 7936 00:31:25.688 } 00:31:25.688 ] 00:31:25.688 }' 00:31:25.688 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:25.688 16:48:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:26.354 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:26.354 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:31:26.613 [2024-07-24 16:48:23.291819] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:26.613 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:31:26.613 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.613 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:26.871 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:31:26.871 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:31:26.871 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:31:26.871 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:31:26.871 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:26.871 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:26.871 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:26.871 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:26.871 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:26.871 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:26.871 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:31:26.871 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:26.871 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:26.871 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:27.129 [2024-07-24 16:48:23.748765] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:31:27.129 /dev/nbd0 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:27.129 1+0 records in 00:31:27.129 1+0 records out 00:31:27.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274043 s, 14.9 MB/s 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:31:27.129 16:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:31:28.065 7936+0 records in 00:31:28.065 7936+0 records out 00:31:28.065 32505856 bytes (33 MB, 31 MiB) copied, 0.810988 s, 40.1 MB/s 00:31:28.065 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:28.065 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:28.065 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:28.065 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:28.065 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:31:28.065 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:28.065 16:48:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:28.065 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:28.065 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:28.065 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:28.065 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:28.065 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:28.065 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:28.066 [2024-07-24 16:48:24.868304] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:28.066 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:31:28.066 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:31:28.066 16:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:28.324 [2024-07-24 16:48:25.080983] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:28.324 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:28.324 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:28.324 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:28.324 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:28.324 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:31:28.324 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:28.324 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:28.325 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:28.325 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:28.325 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:28.325 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.325 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.584 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:28.584 "name": "raid_bdev1", 00:31:28.584 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:28.584 "strip_size_kb": 0, 00:31:28.584 "state": "online", 00:31:28.584 "raid_level": "raid1", 00:31:28.584 "superblock": true, 00:31:28.584 "num_base_bdevs": 2, 00:31:28.584 "num_base_bdevs_discovered": 1, 00:31:28.584 "num_base_bdevs_operational": 1, 00:31:28.584 "base_bdevs_list": [ 00:31:28.584 { 00:31:28.584 "name": null, 00:31:28.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.584 "is_configured": false, 00:31:28.584 "data_offset": 256, 00:31:28.584 "data_size": 7936 00:31:28.584 }, 00:31:28.584 { 00:31:28.584 "name": "BaseBdev2", 00:31:28.584 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:28.584 "is_configured": true, 00:31:28.584 "data_offset": 256, 00:31:28.584 "data_size": 7936 00:31:28.584 } 00:31:28.584 ] 00:31:28.584 }' 00:31:28.584 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:28.584 16:48:25 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:29.151 16:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:29.409 [2024-07-24 16:48:26.127825] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:29.409 [2024-07-24 16:48:26.155054] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a4410 00:31:29.409 [2024-07-24 16:48:26.157391] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:29.409 16:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:30.344 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:30.344 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:30.344 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:30.344 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:30.344 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:30.344 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:30.344 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:30.603 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:30.603 "name": "raid_bdev1", 00:31:30.603 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:30.603 "strip_size_kb": 0, 00:31:30.603 "state": "online", 00:31:30.603 "raid_level": "raid1", 00:31:30.603 
"superblock": true, 00:31:30.603 "num_base_bdevs": 2, 00:31:30.603 "num_base_bdevs_discovered": 2, 00:31:30.603 "num_base_bdevs_operational": 2, 00:31:30.603 "process": { 00:31:30.603 "type": "rebuild", 00:31:30.603 "target": "spare", 00:31:30.603 "progress": { 00:31:30.603 "blocks": 3072, 00:31:30.603 "percent": 38 00:31:30.603 } 00:31:30.603 }, 00:31:30.603 "base_bdevs_list": [ 00:31:30.603 { 00:31:30.603 "name": "spare", 00:31:30.603 "uuid": "416be27e-c468-5e9b-8af0-48ae51133e7b", 00:31:30.603 "is_configured": true, 00:31:30.603 "data_offset": 256, 00:31:30.603 "data_size": 7936 00:31:30.603 }, 00:31:30.603 { 00:31:30.603 "name": "BaseBdev2", 00:31:30.603 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:30.603 "is_configured": true, 00:31:30.603 "data_offset": 256, 00:31:30.603 "data_size": 7936 00:31:30.603 } 00:31:30.603 ] 00:31:30.603 }' 00:31:30.603 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:30.603 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:30.603 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:30.862 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:30.862 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:30.862 [2024-07-24 16:48:27.699251] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:31.121 [2024-07-24 16:48:27.770358] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:31.121 [2024-07-24 16:48:27.770423] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:31.121 [2024-07-24 16:48:27.770444] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:31:31.121 [2024-07-24 16:48:27.770467] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:31.121 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:31.121 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:31.121 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:31.121 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:31.121 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:31.121 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:31.121 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:31.121 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:31.121 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:31.121 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:31.121 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:31.121 16:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:31.380 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:31.380 "name": "raid_bdev1", 00:31:31.380 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:31.380 "strip_size_kb": 0, 00:31:31.380 "state": "online", 00:31:31.380 "raid_level": "raid1", 00:31:31.380 "superblock": true, 00:31:31.380 "num_base_bdevs": 2, 
00:31:31.380 "num_base_bdevs_discovered": 1, 00:31:31.380 "num_base_bdevs_operational": 1, 00:31:31.380 "base_bdevs_list": [ 00:31:31.380 { 00:31:31.380 "name": null, 00:31:31.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:31.380 "is_configured": false, 00:31:31.380 "data_offset": 256, 00:31:31.380 "data_size": 7936 00:31:31.380 }, 00:31:31.380 { 00:31:31.380 "name": "BaseBdev2", 00:31:31.380 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:31.380 "is_configured": true, 00:31:31.380 "data_offset": 256, 00:31:31.380 "data_size": 7936 00:31:31.380 } 00:31:31.380 ] 00:31:31.380 }' 00:31:31.380 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:31.380 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:31.947 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:31.947 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:31.947 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:31.947 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:31.947 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:31.947 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:31.947 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.206 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:32.206 "name": "raid_bdev1", 00:31:32.206 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:32.206 "strip_size_kb": 0, 00:31:32.206 "state": "online", 00:31:32.206 "raid_level": 
"raid1", 00:31:32.206 "superblock": true, 00:31:32.206 "num_base_bdevs": 2, 00:31:32.206 "num_base_bdevs_discovered": 1, 00:31:32.206 "num_base_bdevs_operational": 1, 00:31:32.206 "base_bdevs_list": [ 00:31:32.206 { 00:31:32.206 "name": null, 00:31:32.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:32.206 "is_configured": false, 00:31:32.206 "data_offset": 256, 00:31:32.206 "data_size": 7936 00:31:32.206 }, 00:31:32.206 { 00:31:32.206 "name": "BaseBdev2", 00:31:32.206 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:32.206 "is_configured": true, 00:31:32.206 "data_offset": 256, 00:31:32.206 "data_size": 7936 00:31:32.206 } 00:31:32.206 ] 00:31:32.206 }' 00:31:32.206 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:32.206 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:32.206 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:32.206 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:32.206 16:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:32.465 [2024-07-24 16:48:29.165641] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:32.465 [2024-07-24 16:48:29.191243] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a44e0 00:31:32.465 [2024-07-24 16:48:29.193574] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:32.465 16:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@678 -- # sleep 1 00:31:33.400 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:33.400 16:48:30 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:33.400 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:33.400 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:33.400 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:33.400 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:33.400 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:33.659 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:33.659 "name": "raid_bdev1", 00:31:33.659 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:33.659 "strip_size_kb": 0, 00:31:33.659 "state": "online", 00:31:33.659 "raid_level": "raid1", 00:31:33.659 "superblock": true, 00:31:33.659 "num_base_bdevs": 2, 00:31:33.659 "num_base_bdevs_discovered": 2, 00:31:33.659 "num_base_bdevs_operational": 2, 00:31:33.659 "process": { 00:31:33.659 "type": "rebuild", 00:31:33.659 "target": "spare", 00:31:33.659 "progress": { 00:31:33.659 "blocks": 3072, 00:31:33.659 "percent": 38 00:31:33.659 } 00:31:33.659 }, 00:31:33.659 "base_bdevs_list": [ 00:31:33.659 { 00:31:33.659 "name": "spare", 00:31:33.659 "uuid": "416be27e-c468-5e9b-8af0-48ae51133e7b", 00:31:33.659 "is_configured": true, 00:31:33.659 "data_offset": 256, 00:31:33.659 "data_size": 7936 00:31:33.659 }, 00:31:33.659 { 00:31:33.659 "name": "BaseBdev2", 00:31:33.659 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:33.659 "is_configured": true, 00:31:33.659 "data_offset": 256, 00:31:33.659 "data_size": 7936 00:31:33.659 } 00:31:33.659 ] 00:31:33.659 }' 00:31:33.659 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:33.659 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:33.660 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:31:33.918 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # local timeout=1114 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:33.918 "name": "raid_bdev1", 00:31:33.918 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:33.918 "strip_size_kb": 0, 00:31:33.918 "state": "online", 00:31:33.918 "raid_level": "raid1", 00:31:33.918 "superblock": true, 00:31:33.918 "num_base_bdevs": 2, 00:31:33.918 "num_base_bdevs_discovered": 2, 00:31:33.918 "num_base_bdevs_operational": 2, 00:31:33.918 "process": { 00:31:33.918 "type": "rebuild", 00:31:33.918 "target": "spare", 00:31:33.918 "progress": { 00:31:33.918 "blocks": 3840, 00:31:33.918 "percent": 48 00:31:33.918 } 00:31:33.918 }, 00:31:33.918 "base_bdevs_list": [ 00:31:33.918 { 00:31:33.918 "name": "spare", 00:31:33.918 "uuid": "416be27e-c468-5e9b-8af0-48ae51133e7b", 00:31:33.918 "is_configured": true, 00:31:33.918 "data_offset": 256, 00:31:33.918 "data_size": 7936 00:31:33.918 }, 00:31:33.918 { 00:31:33.918 "name": "BaseBdev2", 00:31:33.918 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:33.918 "is_configured": true, 00:31:33.918 "data_offset": 256, 00:31:33.918 "data_size": 7936 00:31:33.918 } 00:31:33.918 ] 00:31:33.918 }' 00:31:33.918 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:34.177 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:34.177 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:34.177 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:34.177 16:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:35.110 
16:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:35.110 16:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:35.110 16:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:35.110 16:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:35.110 16:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:35.110 16:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:35.110 16:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.110 16:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:35.367 16:48:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:35.367 "name": "raid_bdev1", 00:31:35.367 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:35.367 "strip_size_kb": 0, 00:31:35.367 "state": "online", 00:31:35.367 "raid_level": "raid1", 00:31:35.367 "superblock": true, 00:31:35.367 "num_base_bdevs": 2, 00:31:35.367 "num_base_bdevs_discovered": 2, 00:31:35.367 "num_base_bdevs_operational": 2, 00:31:35.367 "process": { 00:31:35.367 "type": "rebuild", 00:31:35.367 "target": "spare", 00:31:35.367 "progress": { 00:31:35.367 "blocks": 7168, 00:31:35.367 "percent": 90 00:31:35.367 } 00:31:35.367 }, 00:31:35.367 "base_bdevs_list": [ 00:31:35.367 { 00:31:35.367 "name": "spare", 00:31:35.367 "uuid": "416be27e-c468-5e9b-8af0-48ae51133e7b", 00:31:35.367 "is_configured": true, 00:31:35.367 "data_offset": 256, 00:31:35.367 "data_size": 7936 00:31:35.367 }, 00:31:35.367 { 00:31:35.367 "name": "BaseBdev2", 00:31:35.367 "uuid": 
"36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:35.367 "is_configured": true, 00:31:35.367 "data_offset": 256, 00:31:35.367 "data_size": 7936 00:31:35.367 } 00:31:35.367 ] 00:31:35.367 }' 00:31:35.367 16:48:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:35.367 16:48:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:35.367 16:48:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:35.367 16:48:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:35.367 16:48:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:35.625 [2024-07-24 16:48:32.318537] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:35.625 [2024-07-24 16:48:32.318611] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:35.625 [2024-07-24 16:48:32.318714] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:36.560 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:36.560 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:36.560 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:36.560 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:36.560 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:36.560 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:36.560 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:31:36.560 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:36.560 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:36.560 "name": "raid_bdev1", 00:31:36.560 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:36.560 "strip_size_kb": 0, 00:31:36.560 "state": "online", 00:31:36.560 "raid_level": "raid1", 00:31:36.560 "superblock": true, 00:31:36.560 "num_base_bdevs": 2, 00:31:36.560 "num_base_bdevs_discovered": 2, 00:31:36.560 "num_base_bdevs_operational": 2, 00:31:36.560 "base_bdevs_list": [ 00:31:36.560 { 00:31:36.560 "name": "spare", 00:31:36.560 "uuid": "416be27e-c468-5e9b-8af0-48ae51133e7b", 00:31:36.560 "is_configured": true, 00:31:36.560 "data_offset": 256, 00:31:36.560 "data_size": 7936 00:31:36.560 }, 00:31:36.560 { 00:31:36.560 "name": "BaseBdev2", 00:31:36.560 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:36.560 "is_configured": true, 00:31:36.560 "data_offset": 256, 00:31:36.560 "data_size": 7936 00:31:36.560 } 00:31:36.560 ] 00:31:36.560 }' 00:31:36.560 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:36.818 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:36.818 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:36.818 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:36.818 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@724 -- # break 00:31:36.818 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:36.818 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:36.818 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:36.818 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:36.818 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:36.818 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.818 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:37.076 "name": "raid_bdev1", 00:31:37.076 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:37.076 "strip_size_kb": 0, 00:31:37.076 "state": "online", 00:31:37.076 "raid_level": "raid1", 00:31:37.076 "superblock": true, 00:31:37.076 "num_base_bdevs": 2, 00:31:37.076 "num_base_bdevs_discovered": 2, 00:31:37.076 "num_base_bdevs_operational": 2, 00:31:37.076 "base_bdevs_list": [ 00:31:37.076 { 00:31:37.076 "name": "spare", 00:31:37.076 "uuid": "416be27e-c468-5e9b-8af0-48ae51133e7b", 00:31:37.076 "is_configured": true, 00:31:37.076 "data_offset": 256, 00:31:37.076 "data_size": 7936 00:31:37.076 }, 00:31:37.076 { 00:31:37.076 "name": "BaseBdev2", 00:31:37.076 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:37.076 "is_configured": true, 00:31:37.076 "data_offset": 256, 00:31:37.076 "data_size": 7936 00:31:37.076 } 00:31:37.076 ] 00:31:37.076 }' 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ 
none == \n\o\n\e ]] 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:37.076 16:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:37.334 16:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:37.334 "name": "raid_bdev1", 00:31:37.334 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:37.334 "strip_size_kb": 0, 00:31:37.334 "state": "online", 00:31:37.334 "raid_level": "raid1", 00:31:37.335 "superblock": true, 00:31:37.335 "num_base_bdevs": 2, 00:31:37.335 "num_base_bdevs_discovered": 2, 00:31:37.335 "num_base_bdevs_operational": 2, 00:31:37.335 "base_bdevs_list": [ 00:31:37.335 { 
00:31:37.335 "name": "spare", 00:31:37.335 "uuid": "416be27e-c468-5e9b-8af0-48ae51133e7b", 00:31:37.335 "is_configured": true, 00:31:37.335 "data_offset": 256, 00:31:37.335 "data_size": 7936 00:31:37.335 }, 00:31:37.335 { 00:31:37.335 "name": "BaseBdev2", 00:31:37.335 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:37.335 "is_configured": true, 00:31:37.335 "data_offset": 256, 00:31:37.335 "data_size": 7936 00:31:37.335 } 00:31:37.335 ] 00:31:37.335 }' 00:31:37.335 16:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:37.335 16:48:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:37.901 16:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:38.160 [2024-07-24 16:48:34.790877] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:38.160 [2024-07-24 16:48:34.790914] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:38.160 [2024-07-24 16:48:34.791002] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:38.160 [2024-07-24 16:48:34.791080] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:38.160 [2024-07-24 16:48:34.791097] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:31:38.160 16:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # jq length 00:31:38.160 16:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:38.418 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:31:38.418 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:31:38.418 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:31:38.418 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:38.418 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:38.418 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:38.418 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:38.418 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:38.418 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:38.418 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:31:38.418 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:38.418 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:38.418 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:38.418 /dev/nbd0 00:31:38.418 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:38.677 1+0 records in 00:31:38.677 1+0 records out 00:31:38.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234065 s, 17.5 MB/s 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:38.677 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:31:38.677 /dev/nbd1 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:38.935 1+0 records in 00:31:38.935 1+0 records out 00:31:38.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003003 s, 13.6 MB/s 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 
00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:38.935 16:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:39.193 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:39.193 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:39.193 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:39.193 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:39.193 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:39.193 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:39.193 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:31:39.193 16:48:36 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@45 -- # return 0 00:31:39.193 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:39.193 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:39.452 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:39.452 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:39.452 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:39.452 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:39.452 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:39.452 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:39.452 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:31:39.452 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:31:39.452 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:31:39.452 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:39.710 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:39.969 [2024-07-24 16:48:36.741236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:39.969 [2024-07-24 16:48:36.741292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:39.969 [2024-07-24 16:48:36.741322] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:31:39.969 [2024-07-24 16:48:36.741339] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:39.969 [2024-07-24 16:48:36.744126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:39.969 [2024-07-24 16:48:36.744171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:39.969 [2024-07-24 16:48:36.744271] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:39.969 [2024-07-24 16:48:36.744342] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:39.969 [2024-07-24 16:48:36.744558] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:39.969 spare 00:31:39.969 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:39.969 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:39.969 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:39.969 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:39.969 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:39.969 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:39.969 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:39.969 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:39.969 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:39.969 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:39.969 16:48:36 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.969 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.228 [2024-07-24 16:48:36.844902] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043880 00:31:40.228 [2024-07-24 16:48:36.844933] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:40.228 [2024-07-24 16:48:36.845276] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9390 00:31:40.228 [2024-07-24 16:48:36.845543] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043880 00:31:40.228 [2024-07-24 16:48:36.845559] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043880 00:31:40.228 [2024-07-24 16:48:36.845760] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:40.228 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:40.228 "name": "raid_bdev1", 00:31:40.228 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:40.228 "strip_size_kb": 0, 00:31:40.228 "state": "online", 00:31:40.228 "raid_level": "raid1", 00:31:40.228 "superblock": true, 00:31:40.228 "num_base_bdevs": 2, 00:31:40.228 "num_base_bdevs_discovered": 2, 00:31:40.228 "num_base_bdevs_operational": 2, 00:31:40.228 "base_bdevs_list": [ 00:31:40.228 { 00:31:40.228 "name": "spare", 00:31:40.228 "uuid": "416be27e-c468-5e9b-8af0-48ae51133e7b", 00:31:40.228 "is_configured": true, 00:31:40.228 "data_offset": 256, 00:31:40.228 "data_size": 7936 00:31:40.228 }, 00:31:40.228 { 00:31:40.228 "name": "BaseBdev2", 00:31:40.228 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:40.228 "is_configured": true, 00:31:40.228 "data_offset": 256, 00:31:40.228 
"data_size": 7936 00:31:40.228 } 00:31:40.228 ] 00:31:40.228 }' 00:31:40.228 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:40.228 16:48:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:40.795 16:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:40.795 16:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:40.795 16:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:40.795 16:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:40.795 16:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:40.795 16:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.795 16:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.054 16:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:41.054 "name": "raid_bdev1", 00:31:41.054 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:41.054 "strip_size_kb": 0, 00:31:41.054 "state": "online", 00:31:41.054 "raid_level": "raid1", 00:31:41.054 "superblock": true, 00:31:41.054 "num_base_bdevs": 2, 00:31:41.054 "num_base_bdevs_discovered": 2, 00:31:41.054 "num_base_bdevs_operational": 2, 00:31:41.054 "base_bdevs_list": [ 00:31:41.054 { 00:31:41.054 "name": "spare", 00:31:41.054 "uuid": "416be27e-c468-5e9b-8af0-48ae51133e7b", 00:31:41.054 "is_configured": true, 00:31:41.054 "data_offset": 256, 00:31:41.054 "data_size": 7936 00:31:41.054 }, 00:31:41.054 { 00:31:41.054 "name": "BaseBdev2", 00:31:41.054 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 
00:31:41.054 "is_configured": true, 00:31:41.054 "data_offset": 256, 00:31:41.054 "data_size": 7936 00:31:41.054 } 00:31:41.054 ] 00:31:41.054 }' 00:31:41.054 16:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:41.054 16:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:41.054 16:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:41.054 16:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:41.054 16:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.054 16:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:41.313 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:31:41.313 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:41.571 [2024-07-24 16:48:38.309758] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:41.571 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:41.572 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:41.572 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:41.572 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:41.572 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:41.572 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:41.572 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:41.572 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:41.572 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:41.572 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:41.572 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.572 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.830 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:41.830 "name": "raid_bdev1", 00:31:41.830 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:41.830 "strip_size_kb": 0, 00:31:41.830 "state": "online", 00:31:41.830 "raid_level": "raid1", 00:31:41.830 "superblock": true, 00:31:41.830 "num_base_bdevs": 2, 00:31:41.830 "num_base_bdevs_discovered": 1, 00:31:41.830 "num_base_bdevs_operational": 1, 00:31:41.830 "base_bdevs_list": [ 00:31:41.830 { 00:31:41.830 "name": null, 00:31:41.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:41.830 "is_configured": false, 00:31:41.830 "data_offset": 256, 00:31:41.830 "data_size": 7936 00:31:41.830 }, 00:31:41.830 { 00:31:41.830 "name": "BaseBdev2", 00:31:41.830 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:41.830 "is_configured": true, 00:31:41.830 "data_offset": 256, 00:31:41.830 "data_size": 7936 00:31:41.830 } 00:31:41.830 ] 00:31:41.830 }' 00:31:41.830 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:41.830 16:48:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:31:42.397 16:48:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:42.656 [2024-07-24 16:48:39.320488] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:42.656 [2024-07-24 16:48:39.320698] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:42.656 [2024-07-24 16:48:39.320724] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:42.656 [2024-07-24 16:48:39.320761] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:42.656 [2024-07-24 16:48:39.345794] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9460 00:31:42.656 [2024-07-24 16:48:39.348073] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:42.656 16:48:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # sleep 1 00:31:43.592 16:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:43.592 16:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:43.592 16:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:43.592 16:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:43.592 16:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:43.592 16:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:43.592 16:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.850 16:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:43.850 "name": "raid_bdev1", 00:31:43.850 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:43.850 "strip_size_kb": 0, 00:31:43.850 "state": "online", 00:31:43.850 "raid_level": "raid1", 00:31:43.850 "superblock": true, 00:31:43.850 "num_base_bdevs": 2, 00:31:43.850 "num_base_bdevs_discovered": 2, 00:31:43.850 "num_base_bdevs_operational": 2, 00:31:43.850 "process": { 00:31:43.850 "type": "rebuild", 00:31:43.850 "target": "spare", 00:31:43.850 "progress": { 00:31:43.850 "blocks": 3072, 00:31:43.850 "percent": 38 00:31:43.850 } 00:31:43.850 }, 00:31:43.850 "base_bdevs_list": [ 00:31:43.850 { 00:31:43.850 "name": "spare", 00:31:43.850 "uuid": "416be27e-c468-5e9b-8af0-48ae51133e7b", 00:31:43.850 "is_configured": true, 00:31:43.850 "data_offset": 256, 00:31:43.850 "data_size": 7936 00:31:43.850 }, 00:31:43.850 { 00:31:43.850 "name": "BaseBdev2", 00:31:43.850 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:43.850 "is_configured": true, 00:31:43.850 "data_offset": 256, 00:31:43.850 "data_size": 7936 00:31:43.850 } 00:31:43.850 ] 00:31:43.850 }' 00:31:43.850 16:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:43.850 16:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:43.850 16:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:43.850 16:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:43.850 16:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:44.108 [2024-07-24 16:48:40.880983] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:31:44.108 [2024-07-24 16:48:40.961020] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:44.108 [2024-07-24 16:48:40.961100] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:44.108 [2024-07-24 16:48:40.961122] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:44.108 [2024-07-24 16:48:40.961136] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:44.367 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:44.367 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:44.367 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:44.367 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:44.367 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:44.367 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:44.367 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:44.367 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:44.367 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:44.367 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:44.367 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.367 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:31:44.658 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:44.658 "name": "raid_bdev1", 00:31:44.659 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9", 00:31:44.659 "strip_size_kb": 0, 00:31:44.659 "state": "online", 00:31:44.659 "raid_level": "raid1", 00:31:44.659 "superblock": true, 00:31:44.659 "num_base_bdevs": 2, 00:31:44.659 "num_base_bdevs_discovered": 1, 00:31:44.659 "num_base_bdevs_operational": 1, 00:31:44.659 "base_bdevs_list": [ 00:31:44.659 { 00:31:44.659 "name": null, 00:31:44.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.659 "is_configured": false, 00:31:44.659 "data_offset": 256, 00:31:44.659 "data_size": 7936 00:31:44.659 }, 00:31:44.659 { 00:31:44.659 "name": "BaseBdev2", 00:31:44.659 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11", 00:31:44.659 "is_configured": true, 00:31:44.659 "data_offset": 256, 00:31:44.659 "data_size": 7936 00:31:44.659 } 00:31:44.659 ] 00:31:44.659 }' 00:31:44.659 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:44.659 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:45.227 16:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:45.227 [2024-07-24 16:48:41.981818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:45.227 [2024-07-24 16:48:41.981889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:45.227 [2024-07-24 16:48:41.981917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:31:45.227 [2024-07-24 16:48:41.981935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:45.227 [2024-07-24 16:48:41.982572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:31:45.227 [2024-07-24 16:48:41.982606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:31:45.227 [2024-07-24 16:48:41.982714] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:31:45.227 [2024-07-24 16:48:41.982734] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:31:45.227 [2024-07-24 16:48:41.982750] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:31:45.227 [2024-07-24 16:48:41.982786] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:31:45.227 [2024-07-24 16:48:42.005367] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9530
00:31:45.227 spare
00:31:45.227 [2024-07-24 16:48:42.007671] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:31:45.227 16:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # sleep 1
00:31:46.611 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:31:46.611 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:31:46.611 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:31:46.611 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare
00:31:46.611 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:31:46.611 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:46.611 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:46.611 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:31:46.611 "name": "raid_bdev1",
00:31:46.611 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9",
00:31:46.611 "strip_size_kb": 0,
00:31:46.611 "state": "online",
00:31:46.611 "raid_level": "raid1",
00:31:46.611 "superblock": true,
00:31:46.611 "num_base_bdevs": 2,
00:31:46.611 "num_base_bdevs_discovered": 2,
00:31:46.611 "num_base_bdevs_operational": 2,
00:31:46.611 "process": {
00:31:46.611 "type": "rebuild",
00:31:46.611 "target": "spare",
00:31:46.611 "progress": {
00:31:46.611 "blocks": 2816,
00:31:46.611 "percent": 35
00:31:46.611 }
00:31:46.611 },
00:31:46.611 "base_bdevs_list": [
00:31:46.611 {
00:31:46.611 "name": "spare",
00:31:46.611 "uuid": "416be27e-c468-5e9b-8af0-48ae51133e7b",
00:31:46.611 "is_configured": true,
00:31:46.611 "data_offset": 256,
00:31:46.611 "data_size": 7936
00:31:46.611 },
00:31:46.611 {
00:31:46.611 "name": "BaseBdev2",
00:31:46.611 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11",
00:31:46.611 "is_configured": true,
00:31:46.611 "data_offset": 256,
00:31:46.611 "data_size": 7936
00:31:46.611 }
00:31:46.611 ]
00:31:46.611 }'
00:31:46.611 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:31:46.611 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:31:46.611 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:31:46.611 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:31:46.611 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:31:46.872 [2024-07-24 16:48:43.500909] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:31:46.872 [2024-07-24 16:48:43.519853] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:31:46.872 [2024-07-24 16:48:43.519914] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:31:46.872 [2024-07-24 16:48:43.519938] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:31:46.872 [2024-07-24 16:48:43.519949] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:31:46.872 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:31:46.872 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:31:46.872 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:31:46.872 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:31:46.872 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:31:46.872 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:31:46.872 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:31:46.872 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:31:46.872 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:31:46.872 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp
00:31:46.872 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:46.872 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:47.131 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:31:47.131 "name": "raid_bdev1",
00:31:47.131 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9",
00:31:47.131 "strip_size_kb": 0,
00:31:47.131 "state": "online",
00:31:47.131 "raid_level": "raid1",
00:31:47.131 "superblock": true,
00:31:47.131 "num_base_bdevs": 2,
00:31:47.131 "num_base_bdevs_discovered": 1,
00:31:47.131 "num_base_bdevs_operational": 1,
00:31:47.131 "base_bdevs_list": [
00:31:47.131 {
00:31:47.131 "name": null,
00:31:47.131 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:47.131 "is_configured": false,
00:31:47.131 "data_offset": 256,
00:31:47.131 "data_size": 7936
00:31:47.131 },
00:31:47.131 {
00:31:47.131 "name": "BaseBdev2",
00:31:47.131 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11",
00:31:47.131 "is_configured": true,
00:31:47.131 "data_offset": 256,
00:31:47.131 "data_size": 7936
00:31:47.131 }
00:31:47.131 ]
00:31:47.131 }'
00:31:47.131 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:31:47.131 16:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:31:47.698 16:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none
00:31:47.698 16:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:31:47.698 16:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:31:47.698 16:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none
00:31:47.698 16:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:31:47.698 16:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:47.698 16:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:47.957 16:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:31:47.957 "name": "raid_bdev1",
00:31:47.957 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9",
00:31:47.957 "strip_size_kb": 0,
00:31:47.957 "state": "online",
00:31:47.957 "raid_level": "raid1",
00:31:47.957 "superblock": true,
00:31:47.957 "num_base_bdevs": 2,
00:31:47.957 "num_base_bdevs_discovered": 1,
00:31:47.957 "num_base_bdevs_operational": 1,
00:31:47.957 "base_bdevs_list": [
00:31:47.957 {
00:31:47.957 "name": null,
00:31:47.957 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:47.957 "is_configured": false,
00:31:47.957 "data_offset": 256,
00:31:47.957 "data_size": 7936
00:31:47.957 },
00:31:47.957 {
00:31:47.957 "name": "BaseBdev2",
00:31:47.957 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11",
00:31:47.957 "is_configured": true,
00:31:47.957 "data_offset": 256,
00:31:47.957 "data_size": 7936
00:31:47.957 }
00:31:47.957 ]
00:31:47.957 }'
00:31:47.957 16:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:31:47.957 16:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:31:47.957 16:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:31:47.957 16:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:31:47.957 16:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:31:48.216 16:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:31:48.474 [2024-07-24 16:48:45.118242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:31:48.474 [2024-07-24 16:48:45.118305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:31:48.474 [2024-07-24 16:48:45.118335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044480
00:31:48.474 [2024-07-24 16:48:45.118351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:31:48.474 [2024-07-24 16:48:45.118936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:31:48.474 [2024-07-24 16:48:45.118963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:31:48.474 [2024-07-24 16:48:45.119063] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:31:48.474 [2024-07-24 16:48:45.119081] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:31:48.474 [2024-07-24 16:48:45.119097] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:31:48.474 BaseBdev1
00:31:48.474 16:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@789 -- # sleep 1
00:31:49.410 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:31:49.410 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:31:49.410 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:31:49.410 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:31:49.410 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:31:49.410 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:31:49.410 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:31:49.410 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:31:49.410 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:31:49.410 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp
00:31:49.410 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:49.410 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:49.669 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:31:49.669 "name": "raid_bdev1",
00:31:49.669 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9",
00:31:49.669 "strip_size_kb": 0,
00:31:49.669 "state": "online",
00:31:49.669 "raid_level": "raid1",
00:31:49.669 "superblock": true,
00:31:49.669 "num_base_bdevs": 2,
00:31:49.669 "num_base_bdevs_discovered": 1,
00:31:49.669 "num_base_bdevs_operational": 1,
00:31:49.669 "base_bdevs_list": [
00:31:49.669 {
00:31:49.669 "name": null,
00:31:49.669 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:49.669 "is_configured": false,
00:31:49.669 "data_offset": 256,
00:31:49.669 "data_size": 7936
00:31:49.669 },
00:31:49.669 {
00:31:49.669 "name": "BaseBdev2",
00:31:49.669 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11",
00:31:49.669 "is_configured": true,
00:31:49.669 "data_offset": 256,
00:31:49.669 "data_size": 7936
00:31:49.669 }
00:31:49.669 ]
00:31:49.669 }'
00:31:49.669 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:31:49.669 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:31:50.234 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none
00:31:50.234 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:31:50.234 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:31:50.234 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none
00:31:50.234 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:31:50.234 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:50.234 16:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:31:50.493 "name": "raid_bdev1",
00:31:50.493 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9",
00:31:50.493 "strip_size_kb": 0,
00:31:50.493 "state": "online",
00:31:50.493 "raid_level": "raid1",
00:31:50.493 "superblock": true,
00:31:50.493 "num_base_bdevs": 2,
00:31:50.493 "num_base_bdevs_discovered": 1,
00:31:50.493 "num_base_bdevs_operational": 1,
00:31:50.493 "base_bdevs_list": [
00:31:50.493 {
00:31:50.493 "name": null,
00:31:50.493 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:50.493 "is_configured": false,
00:31:50.493 "data_offset": 256,
00:31:50.493 "data_size": 7936
00:31:50.493 },
00:31:50.493 {
00:31:50.493 "name": "BaseBdev2",
00:31:50.493 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11",
00:31:50.493 "is_configured": true,
00:31:50.493 "data_offset": 256,
00:31:50.493 "data_size": 7936
00:31:50.493 }
00:31:50.493 ]
00:31:50.493 }'
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:31:50.493 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:31:50.751 [2024-07-24 16:48:47.424485] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:31:50.751 [2024-07-24 16:48:47.424670] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:31:50.751 [2024-07-24 16:48:47.424690] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:31:50.751 request:
00:31:50.751 {
00:31:50.751 "base_bdev": "BaseBdev1",
00:31:50.751 "raid_bdev": "raid_bdev1",
00:31:50.751 "method": "bdev_raid_add_base_bdev",
00:31:50.751 "req_id": 1
00:31:50.751 }
00:31:50.751 Got JSON-RPC error response
00:31:50.751 response:
00:31:50.751 {
00:31:50.751 "code": -22,
00:31:50.751 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:31:50.751 }
00:31:50.751 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1
00:31:50.751 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:31:50.751 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:31:50.751 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:31:50.751 16:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@793 -- # sleep 1
00:31:51.684 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:31:51.684 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:31:51.684 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:31:51.684 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:31:51.684 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:31:51.684 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:31:51.684 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:31:51.685 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:31:51.685 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:31:51.685 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp
00:31:51.685 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:51.685 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:51.942 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:31:51.942 "name": "raid_bdev1",
00:31:51.942 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9",
00:31:51.942 "strip_size_kb": 0,
00:31:51.942 "state": "online",
00:31:51.942 "raid_level": "raid1",
00:31:51.942 "superblock": true,
00:31:51.942 "num_base_bdevs": 2,
00:31:51.942 "num_base_bdevs_discovered": 1,
00:31:51.942 "num_base_bdevs_operational": 1,
00:31:51.942 "base_bdevs_list": [
00:31:51.942 {
00:31:51.942 "name": null,
00:31:51.942 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:51.942 "is_configured": false,
00:31:51.942 "data_offset": 256,
00:31:51.942 "data_size": 7936
00:31:51.942 },
00:31:51.942 {
00:31:51.942 "name": "BaseBdev2",
00:31:51.942 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11",
00:31:51.942 "is_configured": true,
00:31:51.942 "data_offset": 256,
00:31:51.942 "data_size": 7936
00:31:51.942 }
00:31:51.942 ]
00:31:51.942 }'
00:31:51.942 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:31:51.942 16:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:31:52.508 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none
00:31:52.509 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:31:52.509 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:31:52.509 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none
00:31:52.509 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:31:52.509 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:31:52.509 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:31:52.767 "name": "raid_bdev1",
00:31:52.767 "uuid": "8e3d418f-d04c-4557-bf47-8be2f1222ee9",
00:31:52.767 "strip_size_kb": 0,
00:31:52.767 "state": "online",
00:31:52.767 "raid_level": "raid1",
00:31:52.767 "superblock": true,
00:31:52.767 "num_base_bdevs": 2,
00:31:52.767 "num_base_bdevs_discovered": 1,
00:31:52.767 "num_base_bdevs_operational": 1,
00:31:52.767 "base_bdevs_list": [
00:31:52.767 {
00:31:52.767 "name": null,
00:31:52.767 "uuid": "00000000-0000-0000-0000-000000000000",
00:31:52.767 "is_configured": false,
00:31:52.767 "data_offset": 256,
00:31:52.767 "data_size": 7936
00:31:52.767 },
00:31:52.767 {
00:31:52.767 "name": "BaseBdev2",
00:31:52.767 "uuid": "36e21f19-6dbb-5537-8656-7eafbf4bee11",
00:31:52.767 "is_configured": true,
00:31:52.767 "data_offset": 256,
00:31:52.767 "data_size": 7936
00:31:52.767 }
00:31:52.767 ]
00:31:52.767 }'
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@798 -- # killprocess 1787398
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 1787398 ']'
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 1787398
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1787398
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1787398'
00:31:52.767 killing process with pid 1787398
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 1787398
00:31:52.767 Received shutdown signal, test time was about 60.000000 seconds
00:31:52.767 
00:31:52.767 Latency(us)
00:31:52.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:52.767 ===================================================================================================================
00:31:52.767 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:31:52.767 [2024-07-24 16:48:49.549998] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:31:52.767 [2024-07-24 16:48:49.550150] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:31:52.767 16:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 1787398
00:31:52.767 [2024-07-24 16:48:49.550212] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:31:52.767 [2024-07-24 16:48:49.550229] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043880 name raid_bdev1, state offline
00:31:53.025 [2024-07-24 16:48:49.884443] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:31:54.927 16:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@800 -- # return 0
00:31:54.927 
00:31:54.927 real 0m32.442s
00:31:54.927 user 0m48.505s
00:31:54.927 sys 0m4.985s
00:31:54.927 16:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:54.927 16:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:31:54.927 ************************************
00:31:54.927 END TEST raid_rebuild_test_sb_4k
00:31:54.927 ************************************
00:31:54.927 16:48:51 bdev_raid -- bdev/bdev_raid.sh@984 -- # base_malloc_params='-m 32'
00:31:54.927 16:48:51 bdev_raid -- bdev/bdev_raid.sh@985 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true
00:31:54.927 16:48:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:31:54.927 16:48:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:54.927 16:48:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:31:54.927 ************************************
00:31:54.927 START TEST raid_state_function_test_sb_md_separate
00:31:54.927 ************************************
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 ))
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ ))
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs ))
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']'
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']'
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=1793244
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1793244'
00:31:54.927 Process raid pid: 1793244
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 1793244 /var/tmp/spdk-raid.sock
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 1793244 ']'
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:31:54.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:54.927 16:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x
00:31:55.186 [2024-07-24 16:48:51.805752] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:31:55.186 [2024-07-24 16:48:51.805875] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:55.186 EAL: Requested device 0000:3d:01.0 cannot be used
00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:55.186 EAL: Requested device 0000:3d:01.1 cannot be used
00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:55.186 EAL: Requested device 0000:3d:01.2 cannot be used
00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:55.186 EAL: Requested device 0000:3d:01.3 cannot be used
00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:55.186 EAL: Requested device 0000:3d:01.4 cannot be used
00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:55.186 EAL: Requested device 0000:3d:01.5 cannot be used
00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:55.186 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: 
Requested device 0000:3f:01.4 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:55.186 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:55.186 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:55.186 [2024-07-24 16:48:52.032955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.752 [2024-07-24 16:48:52.314590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.010 [2024-07-24 16:48:52.668875] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:56.010 [2024-07-24 16:48:52.668910] 
bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:56.010 16:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:56.010 16:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:31:56.010 16:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:56.268 [2024-07-24 16:48:53.070810] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:56.268 [2024-07-24 16:48:53.070873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:56.268 [2024-07-24 16:48:53.070888] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:56.268 [2024-07-24 16:48:53.070905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:56.268 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:56.268 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:56.268 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:56.268 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:56.268 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:56.268 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:56.268 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:56.268 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:56.268 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:56.268 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:56.268 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.268 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:56.526 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:56.526 "name": "Existed_Raid", 00:31:56.526 "uuid": "e36d0cda-ad87-4b60-9da3-7d46441aa27d", 00:31:56.526 "strip_size_kb": 0, 00:31:56.526 "state": "configuring", 00:31:56.526 "raid_level": "raid1", 00:31:56.526 "superblock": true, 00:31:56.526 "num_base_bdevs": 2, 00:31:56.526 "num_base_bdevs_discovered": 0, 00:31:56.526 "num_base_bdevs_operational": 2, 00:31:56.526 "base_bdevs_list": [ 00:31:56.526 { 00:31:56.526 "name": "BaseBdev1", 00:31:56.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.526 "is_configured": false, 00:31:56.526 "data_offset": 0, 00:31:56.526 "data_size": 0 00:31:56.526 }, 00:31:56.526 { 00:31:56.526 "name": "BaseBdev2", 00:31:56.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.526 "is_configured": false, 00:31:56.526 "data_offset": 0, 00:31:56.526 "data_size": 0 00:31:56.526 } 00:31:56.526 ] 00:31:56.526 }' 00:31:56.526 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:56.526 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:31:57.092 16:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:57.350 [2024-07-24 16:48:54.013363] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:57.350 [2024-07-24 16:48:54.013402] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:31:57.350 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:57.350 [2024-07-24 16:48:54.177837] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:57.350 [2024-07-24 16:48:54.177879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:57.350 [2024-07-24 16:48:54.177897] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:57.350 [2024-07-24 16:48:54.177914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:57.350 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:31:57.607 [2024-07-24 16:48:54.406124] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:57.607 BaseBdev1 00:31:57.607 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:31:57.607 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:57.607 16:48:54 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:57.607 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:31:57.607 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:57.607 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:57.607 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:57.865 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:58.123 [ 00:31:58.123 { 00:31:58.123 "name": "BaseBdev1", 00:31:58.123 "aliases": [ 00:31:58.123 "d8b2120a-3c51-48b5-a140-2e9e71924b5e" 00:31:58.123 ], 00:31:58.123 "product_name": "Malloc disk", 00:31:58.123 "block_size": 4096, 00:31:58.123 "num_blocks": 8192, 00:31:58.123 "uuid": "d8b2120a-3c51-48b5-a140-2e9e71924b5e", 00:31:58.123 "md_size": 32, 00:31:58.123 "md_interleave": false, 00:31:58.123 "dif_type": 0, 00:31:58.123 "assigned_rate_limits": { 00:31:58.123 "rw_ios_per_sec": 0, 00:31:58.123 "rw_mbytes_per_sec": 0, 00:31:58.123 "r_mbytes_per_sec": 0, 00:31:58.123 "w_mbytes_per_sec": 0 00:31:58.123 }, 00:31:58.123 "claimed": true, 00:31:58.123 "claim_type": "exclusive_write", 00:31:58.123 "zoned": false, 00:31:58.123 "supported_io_types": { 00:31:58.123 "read": true, 00:31:58.123 "write": true, 00:31:58.123 "unmap": true, 00:31:58.123 "flush": true, 00:31:58.123 "reset": true, 00:31:58.123 "nvme_admin": false, 00:31:58.123 "nvme_io": false, 00:31:58.123 "nvme_io_md": false, 00:31:58.123 "write_zeroes": true, 00:31:58.123 "zcopy": true, 
00:31:58.123 "get_zone_info": false, 00:31:58.123 "zone_management": false, 00:31:58.123 "zone_append": false, 00:31:58.123 "compare": false, 00:31:58.123 "compare_and_write": false, 00:31:58.123 "abort": true, 00:31:58.123 "seek_hole": false, 00:31:58.123 "seek_data": false, 00:31:58.123 "copy": true, 00:31:58.123 "nvme_iov_md": false 00:31:58.123 }, 00:31:58.123 "memory_domains": [ 00:31:58.123 { 00:31:58.123 "dma_device_id": "system", 00:31:58.123 "dma_device_type": 1 00:31:58.123 }, 00:31:58.123 { 00:31:58.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:58.123 "dma_device_type": 2 00:31:58.123 } 00:31:58.123 ], 00:31:58.123 "driver_specific": {} 00:31:58.123 } 00:31:58.123 ] 00:31:58.123 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:31:58.123 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:58.123 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:58.124 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:58.124 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:58.124 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:58.124 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:58.124 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:58.124 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:58.124 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:31:58.124 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:58.124 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:58.124 16:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.383 16:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:58.383 "name": "Existed_Raid", 00:31:58.383 "uuid": "edd5fbb3-9217-4764-8949-f52dddf84adb", 00:31:58.383 "strip_size_kb": 0, 00:31:58.383 "state": "configuring", 00:31:58.383 "raid_level": "raid1", 00:31:58.383 "superblock": true, 00:31:58.383 "num_base_bdevs": 2, 00:31:58.383 "num_base_bdevs_discovered": 1, 00:31:58.383 "num_base_bdevs_operational": 2, 00:31:58.383 "base_bdevs_list": [ 00:31:58.383 { 00:31:58.383 "name": "BaseBdev1", 00:31:58.383 "uuid": "d8b2120a-3c51-48b5-a140-2e9e71924b5e", 00:31:58.383 "is_configured": true, 00:31:58.383 "data_offset": 256, 00:31:58.383 "data_size": 7936 00:31:58.383 }, 00:31:58.383 { 00:31:58.383 "name": "BaseBdev2", 00:31:58.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.383 "is_configured": false, 00:31:58.383 "data_offset": 0, 00:31:58.383 "data_size": 0 00:31:58.383 } 00:31:58.383 ] 00:31:58.383 }' 00:31:58.383 16:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:58.383 16:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:31:58.985 16:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:58.985 [2024-07-24 16:48:55.826145] 
bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:58.985 [2024-07-24 16:48:55.826200] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:31:58.985 16:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:59.243 [2024-07-24 16:48:56.050791] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:59.243 [2024-07-24 16:48:56.053090] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:59.243 [2024-07-24 16:48:56.053135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:59.243 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:31:59.243 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:59.243 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:59.243 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:59.243 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:59.243 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:59.243 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:59.243 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:59.243 16:48:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:59.243 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:59.243 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:59.243 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:59.243 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.243 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:59.501 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:59.501 "name": "Existed_Raid", 00:31:59.501 "uuid": "2bf58a9f-ccc4-44b6-9d73-c9918c27ca99", 00:31:59.501 "strip_size_kb": 0, 00:31:59.501 "state": "configuring", 00:31:59.501 "raid_level": "raid1", 00:31:59.501 "superblock": true, 00:31:59.501 "num_base_bdevs": 2, 00:31:59.501 "num_base_bdevs_discovered": 1, 00:31:59.501 "num_base_bdevs_operational": 2, 00:31:59.501 "base_bdevs_list": [ 00:31:59.501 { 00:31:59.501 "name": "BaseBdev1", 00:31:59.501 "uuid": "d8b2120a-3c51-48b5-a140-2e9e71924b5e", 00:31:59.501 "is_configured": true, 00:31:59.501 "data_offset": 256, 00:31:59.501 "data_size": 7936 00:31:59.501 }, 00:31:59.501 { 00:31:59.501 "name": "BaseBdev2", 00:31:59.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.501 "is_configured": false, 00:31:59.501 "data_offset": 0, 00:31:59.501 "data_size": 0 00:31:59.501 } 00:31:59.501 ] 00:31:59.501 }' 00:31:59.501 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:59.501 16:48:56 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:00.066 16:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:32:00.324 [2024-07-24 16:48:57.087280] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:00.324 [2024-07-24 16:48:57.087530] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:32:00.324 [2024-07-24 16:48:57.087551] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:00.324 [2024-07-24 16:48:57.087656] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:32:00.324 [2024-07-24 16:48:57.087841] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:32:00.324 [2024-07-24 16:48:57.087861] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:32:00.324 [2024-07-24 16:48:57.087991] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:00.324 BaseBdev2 00:32:00.324 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:00.324 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:00.324 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:00.324 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:32:00.324 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:00.324 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:32:00.324 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:00.581 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:00.839 [ 00:32:00.839 { 00:32:00.839 "name": "BaseBdev2", 00:32:00.839 "aliases": [ 00:32:00.839 "88b7821a-601b-42dc-a1a2-309f2a9085c9" 00:32:00.839 ], 00:32:00.839 "product_name": "Malloc disk", 00:32:00.839 "block_size": 4096, 00:32:00.839 "num_blocks": 8192, 00:32:00.839 "uuid": "88b7821a-601b-42dc-a1a2-309f2a9085c9", 00:32:00.839 "md_size": 32, 00:32:00.839 "md_interleave": false, 00:32:00.839 "dif_type": 0, 00:32:00.840 "assigned_rate_limits": { 00:32:00.840 "rw_ios_per_sec": 0, 00:32:00.840 "rw_mbytes_per_sec": 0, 00:32:00.840 "r_mbytes_per_sec": 0, 00:32:00.840 "w_mbytes_per_sec": 0 00:32:00.840 }, 00:32:00.840 "claimed": true, 00:32:00.840 "claim_type": "exclusive_write", 00:32:00.840 "zoned": false, 00:32:00.840 "supported_io_types": { 00:32:00.840 "read": true, 00:32:00.840 "write": true, 00:32:00.840 "unmap": true, 00:32:00.840 "flush": true, 00:32:00.840 "reset": true, 00:32:00.840 "nvme_admin": false, 00:32:00.840 "nvme_io": false, 00:32:00.840 "nvme_io_md": false, 00:32:00.840 "write_zeroes": true, 00:32:00.840 "zcopy": true, 00:32:00.840 "get_zone_info": false, 00:32:00.840 "zone_management": false, 00:32:00.840 "zone_append": false, 00:32:00.840 "compare": false, 00:32:00.840 "compare_and_write": false, 00:32:00.840 "abort": true, 00:32:00.840 "seek_hole": false, 00:32:00.840 "seek_data": false, 00:32:00.840 "copy": true, 00:32:00.840 "nvme_iov_md": false 00:32:00.840 }, 00:32:00.840 "memory_domains": [ 00:32:00.840 { 00:32:00.840 "dma_device_id": "system", 00:32:00.840 
"dma_device_type": 1 00:32:00.840 }, 00:32:00.840 { 00:32:00.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:00.840 "dma_device_type": 2 00:32:00.840 } 00:32:00.840 ], 00:32:00.840 "driver_specific": {} 00:32:00.840 } 00:32:00.840 ] 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.840 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:01.098 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:01.098 "name": "Existed_Raid", 00:32:01.098 "uuid": "2bf58a9f-ccc4-44b6-9d73-c9918c27ca99", 00:32:01.098 "strip_size_kb": 0, 00:32:01.098 "state": "online", 00:32:01.098 "raid_level": "raid1", 00:32:01.098 "superblock": true, 00:32:01.098 "num_base_bdevs": 2, 00:32:01.098 "num_base_bdevs_discovered": 2, 00:32:01.098 "num_base_bdevs_operational": 2, 00:32:01.098 "base_bdevs_list": [ 00:32:01.098 { 00:32:01.098 "name": "BaseBdev1", 00:32:01.098 "uuid": "d8b2120a-3c51-48b5-a140-2e9e71924b5e", 00:32:01.098 "is_configured": true, 00:32:01.098 "data_offset": 256, 00:32:01.098 "data_size": 7936 00:32:01.098 }, 00:32:01.098 { 00:32:01.098 "name": "BaseBdev2", 00:32:01.098 "uuid": "88b7821a-601b-42dc-a1a2-309f2a9085c9", 00:32:01.098 "is_configured": true, 00:32:01.098 "data_offset": 256, 00:32:01.098 "data_size": 7936 00:32:01.098 } 00:32:01.098 ] 00:32:01.098 }' 00:32:01.098 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:01.098 16:48:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:01.662 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:01.662 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:01.662 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:01.662 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local 
base_bdev_info 00:32:01.662 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:01.662 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:32:01.662 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:01.662 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:01.921 [2024-07-24 16:48:58.599818] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:01.921 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:01.921 "name": "Existed_Raid", 00:32:01.921 "aliases": [ 00:32:01.921 "2bf58a9f-ccc4-44b6-9d73-c9918c27ca99" 00:32:01.921 ], 00:32:01.921 "product_name": "Raid Volume", 00:32:01.921 "block_size": 4096, 00:32:01.921 "num_blocks": 7936, 00:32:01.921 "uuid": "2bf58a9f-ccc4-44b6-9d73-c9918c27ca99", 00:32:01.921 "md_size": 32, 00:32:01.921 "md_interleave": false, 00:32:01.921 "dif_type": 0, 00:32:01.921 "assigned_rate_limits": { 00:32:01.921 "rw_ios_per_sec": 0, 00:32:01.921 "rw_mbytes_per_sec": 0, 00:32:01.921 "r_mbytes_per_sec": 0, 00:32:01.921 "w_mbytes_per_sec": 0 00:32:01.921 }, 00:32:01.921 "claimed": false, 00:32:01.921 "zoned": false, 00:32:01.921 "supported_io_types": { 00:32:01.921 "read": true, 00:32:01.921 "write": true, 00:32:01.921 "unmap": false, 00:32:01.921 "flush": false, 00:32:01.921 "reset": true, 00:32:01.921 "nvme_admin": false, 00:32:01.921 "nvme_io": false, 00:32:01.921 "nvme_io_md": false, 00:32:01.921 "write_zeroes": true, 00:32:01.921 "zcopy": false, 00:32:01.921 "get_zone_info": false, 00:32:01.921 "zone_management": false, 00:32:01.921 "zone_append": false, 00:32:01.921 "compare": false, 00:32:01.921 "compare_and_write": 
false, 00:32:01.921 "abort": false, 00:32:01.921 "seek_hole": false, 00:32:01.921 "seek_data": false, 00:32:01.921 "copy": false, 00:32:01.921 "nvme_iov_md": false 00:32:01.921 }, 00:32:01.921 "memory_domains": [ 00:32:01.921 { 00:32:01.921 "dma_device_id": "system", 00:32:01.921 "dma_device_type": 1 00:32:01.921 }, 00:32:01.921 { 00:32:01.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:01.921 "dma_device_type": 2 00:32:01.921 }, 00:32:01.921 { 00:32:01.921 "dma_device_id": "system", 00:32:01.921 "dma_device_type": 1 00:32:01.921 }, 00:32:01.921 { 00:32:01.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:01.921 "dma_device_type": 2 00:32:01.921 } 00:32:01.921 ], 00:32:01.921 "driver_specific": { 00:32:01.921 "raid": { 00:32:01.921 "uuid": "2bf58a9f-ccc4-44b6-9d73-c9918c27ca99", 00:32:01.921 "strip_size_kb": 0, 00:32:01.921 "state": "online", 00:32:01.921 "raid_level": "raid1", 00:32:01.921 "superblock": true, 00:32:01.921 "num_base_bdevs": 2, 00:32:01.921 "num_base_bdevs_discovered": 2, 00:32:01.921 "num_base_bdevs_operational": 2, 00:32:01.921 "base_bdevs_list": [ 00:32:01.921 { 00:32:01.921 "name": "BaseBdev1", 00:32:01.921 "uuid": "d8b2120a-3c51-48b5-a140-2e9e71924b5e", 00:32:01.921 "is_configured": true, 00:32:01.921 "data_offset": 256, 00:32:01.921 "data_size": 7936 00:32:01.921 }, 00:32:01.921 { 00:32:01.921 "name": "BaseBdev2", 00:32:01.921 "uuid": "88b7821a-601b-42dc-a1a2-309f2a9085c9", 00:32:01.921 "is_configured": true, 00:32:01.921 "data_offset": 256, 00:32:01.921 "data_size": 7936 00:32:01.921 } 00:32:01.921 ] 00:32:01.921 } 00:32:01.921 } 00:32:01.921 }' 00:32:01.921 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:01.921 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:32:01.921 BaseBdev2' 00:32:01.921 16:48:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:01.921 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:01.921 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:02.179 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:02.179 "name": "BaseBdev1", 00:32:02.179 "aliases": [ 00:32:02.179 "d8b2120a-3c51-48b5-a140-2e9e71924b5e" 00:32:02.179 ], 00:32:02.179 "product_name": "Malloc disk", 00:32:02.179 "block_size": 4096, 00:32:02.179 "num_blocks": 8192, 00:32:02.179 "uuid": "d8b2120a-3c51-48b5-a140-2e9e71924b5e", 00:32:02.179 "md_size": 32, 00:32:02.179 "md_interleave": false, 00:32:02.179 "dif_type": 0, 00:32:02.179 "assigned_rate_limits": { 00:32:02.179 "rw_ios_per_sec": 0, 00:32:02.179 "rw_mbytes_per_sec": 0, 00:32:02.179 "r_mbytes_per_sec": 0, 00:32:02.179 "w_mbytes_per_sec": 0 00:32:02.179 }, 00:32:02.179 "claimed": true, 00:32:02.179 "claim_type": "exclusive_write", 00:32:02.179 "zoned": false, 00:32:02.179 "supported_io_types": { 00:32:02.179 "read": true, 00:32:02.179 "write": true, 00:32:02.179 "unmap": true, 00:32:02.179 "flush": true, 00:32:02.179 "reset": true, 00:32:02.179 "nvme_admin": false, 00:32:02.179 "nvme_io": false, 00:32:02.179 "nvme_io_md": false, 00:32:02.179 "write_zeroes": true, 00:32:02.179 "zcopy": true, 00:32:02.179 "get_zone_info": false, 00:32:02.179 "zone_management": false, 00:32:02.179 "zone_append": false, 00:32:02.179 "compare": false, 00:32:02.179 "compare_and_write": false, 00:32:02.179 "abort": true, 00:32:02.179 "seek_hole": false, 00:32:02.179 "seek_data": false, 00:32:02.179 "copy": true, 00:32:02.179 "nvme_iov_md": false 00:32:02.179 }, 00:32:02.179 "memory_domains": [ 00:32:02.179 { 
00:32:02.179 "dma_device_id": "system", 00:32:02.179 "dma_device_type": 1 00:32:02.179 }, 00:32:02.179 { 00:32:02.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:02.179 "dma_device_type": 2 00:32:02.179 } 00:32:02.179 ], 00:32:02.179 "driver_specific": {} 00:32:02.179 }' 00:32:02.179 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:02.179 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:02.180 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:02.180 16:48:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:02.180 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:02.438 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:02.438 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:02.438 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:02.438 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:02.438 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:02.438 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:02.438 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:02.438 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:02.438 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:02.438 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:02.696 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:02.696 "name": "BaseBdev2", 00:32:02.696 "aliases": [ 00:32:02.696 "88b7821a-601b-42dc-a1a2-309f2a9085c9" 00:32:02.696 ], 00:32:02.696 "product_name": "Malloc disk", 00:32:02.696 "block_size": 4096, 00:32:02.697 "num_blocks": 8192, 00:32:02.697 "uuid": "88b7821a-601b-42dc-a1a2-309f2a9085c9", 00:32:02.697 "md_size": 32, 00:32:02.697 "md_interleave": false, 00:32:02.697 "dif_type": 0, 00:32:02.697 "assigned_rate_limits": { 00:32:02.697 "rw_ios_per_sec": 0, 00:32:02.697 "rw_mbytes_per_sec": 0, 00:32:02.697 "r_mbytes_per_sec": 0, 00:32:02.697 "w_mbytes_per_sec": 0 00:32:02.697 }, 00:32:02.697 "claimed": true, 00:32:02.697 "claim_type": "exclusive_write", 00:32:02.697 "zoned": false, 00:32:02.697 "supported_io_types": { 00:32:02.697 "read": true, 00:32:02.697 "write": true, 00:32:02.697 "unmap": true, 00:32:02.697 "flush": true, 00:32:02.697 "reset": true, 00:32:02.697 "nvme_admin": false, 00:32:02.697 "nvme_io": false, 00:32:02.697 "nvme_io_md": false, 00:32:02.697 "write_zeroes": true, 00:32:02.697 "zcopy": true, 00:32:02.697 "get_zone_info": false, 00:32:02.697 "zone_management": false, 00:32:02.697 "zone_append": false, 00:32:02.697 "compare": false, 00:32:02.697 "compare_and_write": false, 00:32:02.697 "abort": true, 00:32:02.697 "seek_hole": false, 00:32:02.697 "seek_data": false, 00:32:02.697 "copy": true, 00:32:02.697 "nvme_iov_md": false 00:32:02.697 }, 00:32:02.697 "memory_domains": [ 00:32:02.697 { 00:32:02.697 "dma_device_id": "system", 00:32:02.697 "dma_device_type": 1 00:32:02.697 }, 00:32:02.697 { 00:32:02.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:02.697 "dma_device_type": 2 00:32:02.697 } 00:32:02.697 ], 00:32:02.697 "driver_specific": {} 00:32:02.697 }' 
00:32:02.697 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:02.697 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:02.955 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:02.955 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:02.955 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:02.955 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:02.955 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:02.955 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:02.955 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:02.955 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:02.955 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:03.214 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:03.214 16:48:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:03.473 [2024-07-24 16:49:00.079549] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:32:03.473 
16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.473 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:32:03.731 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:03.731 "name": "Existed_Raid", 00:32:03.731 "uuid": "2bf58a9f-ccc4-44b6-9d73-c9918c27ca99", 00:32:03.731 "strip_size_kb": 0, 00:32:03.731 "state": "online", 00:32:03.731 "raid_level": "raid1", 00:32:03.731 "superblock": true, 00:32:03.731 "num_base_bdevs": 2, 00:32:03.731 "num_base_bdevs_discovered": 1, 00:32:03.731 "num_base_bdevs_operational": 1, 00:32:03.731 "base_bdevs_list": [ 00:32:03.731 { 00:32:03.731 "name": null, 00:32:03.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:03.731 "is_configured": false, 00:32:03.731 "data_offset": 256, 00:32:03.731 "data_size": 7936 00:32:03.731 }, 00:32:03.731 { 00:32:03.731 "name": "BaseBdev2", 00:32:03.731 "uuid": "88b7821a-601b-42dc-a1a2-309f2a9085c9", 00:32:03.731 "is_configured": true, 00:32:03.731 "data_offset": 256, 00:32:03.731 "data_size": 7936 00:32:03.731 } 00:32:03.731 ] 00:32:03.731 }' 00:32:03.731 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:03.731 16:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:04.297 16:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:04.297 16:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:04.297 16:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.297 16:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:04.556 16:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:04.556 16:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:04.556 16:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:05.121 [2024-07-24 16:49:01.729695] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:05.121 [2024-07-24 16:49:01.729812] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:05.121 [2024-07-24 16:49:01.869240] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:05.121 [2024-07-24 16:49:01.869294] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:05.121 [2024-07-24 16:49:01.869313] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:32:05.121 16:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:05.121 16:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:05.121 16:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:05.121 16:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:05.379 16:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:05.379 16:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:05.379 16:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:32:05.379 16:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # 
killprocess 1793244 00:32:05.379 16:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 1793244 ']' 00:32:05.379 16:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 1793244 00:32:05.379 16:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:32:05.379 16:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:05.379 16:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1793244 00:32:05.379 16:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:05.379 16:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:05.379 16:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1793244' 00:32:05.379 killing process with pid 1793244 00:32:05.379 16:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 1793244 00:32:05.379 [2024-07-24 16:49:02.185503] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:05.379 16:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 1793244 00:32:05.379 [2024-07-24 16:49:02.208816] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:07.277 16:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:32:07.277 00:32:07.277 real 0m12.211s 00:32:07.277 user 0m19.894s 00:32:07.277 sys 0m2.087s 00:32:07.277 16:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:07.277 16:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:32:07.277 ************************************ 00:32:07.277 END TEST raid_state_function_test_sb_md_separate 00:32:07.277 ************************************ 00:32:07.277 16:49:03 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:32:07.277 16:49:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:07.278 16:49:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:07.278 16:49:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:07.278 ************************************ 00:32:07.278 START TEST raid_superblock_test_md_separate 00:32:07.278 ************************************ 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:32:07.278 16:49:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@414 -- # local strip_size 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@427 -- # raid_pid=1795455 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@428 -- # waitforlisten 1795455 /var/tmp/spdk-raid.sock 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 1795455 ']' 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:07.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:07.278 16:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:32:07.278 [2024-07-24 16:49:04.081821] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:32:07.278 [2024-07-24 16:49:04.081939] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795455 ] 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:07.536 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:07.536 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:07.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:07.536 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:07.536 [2024-07-24 16:49:04.304700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.794 [2024-07-24 16:49:04.592892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:08.359 [2024-07-24 16:49:04.935255] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:08.359 [2024-07-24 16:49:04.935289] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:08.359 16:49:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:08.359 16:49:05 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:32:08.359 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:32:08.359 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:32:08.359 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:32:08.359 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:32:08.359 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:08.359 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:08.359 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:32:08.359 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:08.359 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:32:08.923 malloc1 00:32:08.923 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:09.181 [2024-07-24 16:49:05.905861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:09.181 [2024-07-24 16:49:05.905929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:09.181 [2024-07-24 16:49:05.905961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:32:09.181 [2024-07-24 
16:49:05.905977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:09.181 [2024-07-24 16:49:05.908492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:09.181 [2024-07-24 16:49:05.908526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:09.181 pt1 00:32:09.181 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:32:09.181 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:32:09.181 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:32:09.181 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:32:09.181 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:09.181 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:09.181 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:32:09.181 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:09.181 16:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:32:09.745 malloc2 00:32:09.745 16:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:10.340 [2024-07-24 16:49:06.959166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:10.340 [2024-07-24 
16:49:06.959230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:10.340 [2024-07-24 16:49:06.959259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:32:10.340 [2024-07-24 16:49:06.959276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:10.340 [2024-07-24 16:49:06.961762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:10.340 [2024-07-24 16:49:06.961796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:10.340 pt2 00:32:10.340 16:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:32:10.340 16:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:32:10.340 16:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:32:10.341 [2024-07-24 16:49:07.195808] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:10.341 [2024-07-24 16:49:07.198178] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:10.341 [2024-07-24 16:49:07.198426] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:32:10.341 [2024-07-24 16:49:07.198444] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:10.341 [2024-07-24 16:49:07.198558] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:32:10.341 [2024-07-24 16:49:07.198757] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:32:10.341 [2024-07-24 16:49:07.198775] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:32:10.341 [2024-07-24 
16:49:07.198915] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:10.597 16:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:10.598 16:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:10.598 16:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:10.598 16:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:10.598 16:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:10.598 16:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:10.598 16:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:10.598 16:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:10.598 16:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:10.598 16:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:10.598 16:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:10.598 16:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:10.598 16:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:10.598 "name": "raid_bdev1", 00:32:10.598 "uuid": "31fd48c4-a888-4f20-93aa-db1447275a91", 00:32:10.598 "strip_size_kb": 0, 00:32:10.598 "state": "online", 00:32:10.598 "raid_level": "raid1", 00:32:10.598 "superblock": true, 
00:32:10.598 "num_base_bdevs": 2, 00:32:10.598 "num_base_bdevs_discovered": 2, 00:32:10.598 "num_base_bdevs_operational": 2, 00:32:10.598 "base_bdevs_list": [ 00:32:10.598 { 00:32:10.598 "name": "pt1", 00:32:10.598 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:10.598 "is_configured": true, 00:32:10.598 "data_offset": 256, 00:32:10.598 "data_size": 7936 00:32:10.598 }, 00:32:10.598 { 00:32:10.598 "name": "pt2", 00:32:10.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:10.598 "is_configured": true, 00:32:10.598 "data_offset": 256, 00:32:10.598 "data_size": 7936 00:32:10.598 } 00:32:10.598 ] 00:32:10.598 }' 00:32:10.598 16:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:10.598 16:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:11.527 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:32:11.527 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:11.527 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:11.527 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:11.527 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:11.527 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:32:11.527 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:11.527 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:11.527 [2024-07-24 16:49:08.238929] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:32:11.527 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:11.527 "name": "raid_bdev1", 00:32:11.527 "aliases": [ 00:32:11.527 "31fd48c4-a888-4f20-93aa-db1447275a91" 00:32:11.527 ], 00:32:11.527 "product_name": "Raid Volume", 00:32:11.527 "block_size": 4096, 00:32:11.527 "num_blocks": 7936, 00:32:11.527 "uuid": "31fd48c4-a888-4f20-93aa-db1447275a91", 00:32:11.527 "md_size": 32, 00:32:11.527 "md_interleave": false, 00:32:11.527 "dif_type": 0, 00:32:11.527 "assigned_rate_limits": { 00:32:11.527 "rw_ios_per_sec": 0, 00:32:11.527 "rw_mbytes_per_sec": 0, 00:32:11.527 "r_mbytes_per_sec": 0, 00:32:11.527 "w_mbytes_per_sec": 0 00:32:11.527 }, 00:32:11.527 "claimed": false, 00:32:11.527 "zoned": false, 00:32:11.527 "supported_io_types": { 00:32:11.527 "read": true, 00:32:11.527 "write": true, 00:32:11.527 "unmap": false, 00:32:11.527 "flush": false, 00:32:11.527 "reset": true, 00:32:11.527 "nvme_admin": false, 00:32:11.527 "nvme_io": false, 00:32:11.527 "nvme_io_md": false, 00:32:11.527 "write_zeroes": true, 00:32:11.527 "zcopy": false, 00:32:11.527 "get_zone_info": false, 00:32:11.527 "zone_management": false, 00:32:11.527 "zone_append": false, 00:32:11.527 "compare": false, 00:32:11.527 "compare_and_write": false, 00:32:11.527 "abort": false, 00:32:11.527 "seek_hole": false, 00:32:11.527 "seek_data": false, 00:32:11.527 "copy": false, 00:32:11.527 "nvme_iov_md": false 00:32:11.527 }, 00:32:11.527 "memory_domains": [ 00:32:11.527 { 00:32:11.527 "dma_device_id": "system", 00:32:11.527 "dma_device_type": 1 00:32:11.527 }, 00:32:11.527 { 00:32:11.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:11.527 "dma_device_type": 2 00:32:11.527 }, 00:32:11.527 { 00:32:11.527 "dma_device_id": "system", 00:32:11.527 "dma_device_type": 1 00:32:11.527 }, 00:32:11.527 { 00:32:11.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:11.527 "dma_device_type": 2 00:32:11.527 } 00:32:11.527 ], 00:32:11.527 
"driver_specific": { 00:32:11.527 "raid": { 00:32:11.527 "uuid": "31fd48c4-a888-4f20-93aa-db1447275a91", 00:32:11.527 "strip_size_kb": 0, 00:32:11.527 "state": "online", 00:32:11.527 "raid_level": "raid1", 00:32:11.527 "superblock": true, 00:32:11.527 "num_base_bdevs": 2, 00:32:11.527 "num_base_bdevs_discovered": 2, 00:32:11.527 "num_base_bdevs_operational": 2, 00:32:11.527 "base_bdevs_list": [ 00:32:11.527 { 00:32:11.527 "name": "pt1", 00:32:11.527 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:11.527 "is_configured": true, 00:32:11.527 "data_offset": 256, 00:32:11.527 "data_size": 7936 00:32:11.527 }, 00:32:11.527 { 00:32:11.527 "name": "pt2", 00:32:11.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:11.527 "is_configured": true, 00:32:11.527 "data_offset": 256, 00:32:11.527 "data_size": 7936 00:32:11.527 } 00:32:11.527 ] 00:32:11.527 } 00:32:11.527 } 00:32:11.527 }' 00:32:11.527 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:11.527 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:11.527 pt2' 00:32:11.527 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:11.527 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:11.527 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:11.784 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:11.784 "name": "pt1", 00:32:11.784 "aliases": [ 00:32:11.784 "00000000-0000-0000-0000-000000000001" 00:32:11.784 ], 00:32:11.784 "product_name": "passthru", 00:32:11.784 "block_size": 4096, 00:32:11.784 "num_blocks": 8192, 
00:32:11.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:11.784 "md_size": 32, 00:32:11.784 "md_interleave": false, 00:32:11.784 "dif_type": 0, 00:32:11.784 "assigned_rate_limits": { 00:32:11.784 "rw_ios_per_sec": 0, 00:32:11.784 "rw_mbytes_per_sec": 0, 00:32:11.784 "r_mbytes_per_sec": 0, 00:32:11.784 "w_mbytes_per_sec": 0 00:32:11.784 }, 00:32:11.784 "claimed": true, 00:32:11.784 "claim_type": "exclusive_write", 00:32:11.784 "zoned": false, 00:32:11.784 "supported_io_types": { 00:32:11.784 "read": true, 00:32:11.784 "write": true, 00:32:11.784 "unmap": true, 00:32:11.784 "flush": true, 00:32:11.784 "reset": true, 00:32:11.784 "nvme_admin": false, 00:32:11.784 "nvme_io": false, 00:32:11.784 "nvme_io_md": false, 00:32:11.784 "write_zeroes": true, 00:32:11.784 "zcopy": true, 00:32:11.784 "get_zone_info": false, 00:32:11.784 "zone_management": false, 00:32:11.784 "zone_append": false, 00:32:11.784 "compare": false, 00:32:11.784 "compare_and_write": false, 00:32:11.784 "abort": true, 00:32:11.784 "seek_hole": false, 00:32:11.784 "seek_data": false, 00:32:11.784 "copy": true, 00:32:11.784 "nvme_iov_md": false 00:32:11.784 }, 00:32:11.784 "memory_domains": [ 00:32:11.784 { 00:32:11.784 "dma_device_id": "system", 00:32:11.785 "dma_device_type": 1 00:32:11.785 }, 00:32:11.785 { 00:32:11.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:11.785 "dma_device_type": 2 00:32:11.785 } 00:32:11.785 ], 00:32:11.785 "driver_specific": { 00:32:11.785 "passthru": { 00:32:11.785 "name": "pt1", 00:32:11.785 "base_bdev_name": "malloc1" 00:32:11.785 } 00:32:11.785 } 00:32:11.785 }' 00:32:11.785 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:11.785 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:11.785 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:11.785 16:49:08 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:12.042 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:12.042 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:12.042 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:12.042 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:12.042 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:12.042 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:12.042 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:12.042 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:12.042 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:12.042 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:12.042 16:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:12.299 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:12.299 "name": "pt2", 00:32:12.299 "aliases": [ 00:32:12.299 "00000000-0000-0000-0000-000000000002" 00:32:12.300 ], 00:32:12.300 "product_name": "passthru", 00:32:12.300 "block_size": 4096, 00:32:12.300 "num_blocks": 8192, 00:32:12.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:12.300 "md_size": 32, 00:32:12.300 "md_interleave": false, 00:32:12.300 "dif_type": 0, 00:32:12.300 "assigned_rate_limits": { 00:32:12.300 "rw_ios_per_sec": 0, 00:32:12.300 "rw_mbytes_per_sec": 0, 00:32:12.300 "r_mbytes_per_sec": 0, 
00:32:12.300 "w_mbytes_per_sec": 0 00:32:12.300 }, 00:32:12.300 "claimed": true, 00:32:12.300 "claim_type": "exclusive_write", 00:32:12.300 "zoned": false, 00:32:12.300 "supported_io_types": { 00:32:12.300 "read": true, 00:32:12.300 "write": true, 00:32:12.300 "unmap": true, 00:32:12.300 "flush": true, 00:32:12.300 "reset": true, 00:32:12.300 "nvme_admin": false, 00:32:12.300 "nvme_io": false, 00:32:12.300 "nvme_io_md": false, 00:32:12.300 "write_zeroes": true, 00:32:12.300 "zcopy": true, 00:32:12.300 "get_zone_info": false, 00:32:12.300 "zone_management": false, 00:32:12.300 "zone_append": false, 00:32:12.300 "compare": false, 00:32:12.300 "compare_and_write": false, 00:32:12.300 "abort": true, 00:32:12.300 "seek_hole": false, 00:32:12.300 "seek_data": false, 00:32:12.300 "copy": true, 00:32:12.300 "nvme_iov_md": false 00:32:12.300 }, 00:32:12.300 "memory_domains": [ 00:32:12.300 { 00:32:12.300 "dma_device_id": "system", 00:32:12.300 "dma_device_type": 1 00:32:12.300 }, 00:32:12.300 { 00:32:12.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:12.300 "dma_device_type": 2 00:32:12.300 } 00:32:12.300 ], 00:32:12.300 "driver_specific": { 00:32:12.300 "passthru": { 00:32:12.300 "name": "pt2", 00:32:12.300 "base_bdev_name": "malloc2" 00:32:12.300 } 00:32:12.300 } 00:32:12.300 }' 00:32:12.300 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:12.300 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:12.557 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:12.557 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:12.557 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:12.557 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:12.557 16:49:09 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:12.557 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:12.557 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:12.557 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:12.815 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:12.815 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:12.815 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:12.815 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:32:12.815 [2024-07-24 16:49:09.662808] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:13.072 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=31fd48c4-a888-4f20-93aa-db1447275a91 00:32:13.072 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' -z 31fd48c4-a888-4f20-93aa-db1447275a91 ']' 00:32:13.072 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:13.072 [2024-07-24 16:49:09.887068] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:13.072 [2024-07-24 16:49:09.887101] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:13.072 [2024-07-24 16:49:09.887194] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:13.072 [2024-07-24 
16:49:09.887265] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:13.072 [2024-07-24 16:49:09.887299] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:32:13.072 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:13.072 16:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:32:13.329 16:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:32:13.329 16:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:32:13.329 16:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:32:13.329 16:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:13.586 16:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:32:13.586 16:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:13.843 16:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:32:13.843 16:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:14.156 16:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:32:14.156 16:49:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:14.156 16:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:32:14.156 16:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:14.156 16:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:14.156 16:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:14.157 16:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:14.157 16:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:14.157 16:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:14.157 16:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:14.157 16:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:14.157 16:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:32:14.157 16:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:14.424 [2024-07-24 16:49:11.022190] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:14.424 [2024-07-24 16:49:11.024527] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:14.424 [2024-07-24 16:49:11.024605] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:14.424 [2024-07-24 16:49:11.024665] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:14.424 [2024-07-24 16:49:11.024690] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:14.424 [2024-07-24 16:49:11.024706] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:32:14.424 request: 00:32:14.424 { 00:32:14.424 "name": "raid_bdev1", 00:32:14.424 "raid_level": "raid1", 00:32:14.424 "base_bdevs": [ 00:32:14.424 "malloc1", 00:32:14.424 "malloc2" 00:32:14.424 ], 00:32:14.424 "superblock": false, 00:32:14.424 "method": "bdev_raid_create", 00:32:14.424 "req_id": 1 00:32:14.424 } 00:32:14.424 Got JSON-RPC error response 00:32:14.424 response: 00:32:14.424 { 00:32:14.424 "code": -17, 00:32:14.424 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:14.424 } 00:32:14.424 16:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:32:14.424 16:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:14.424 16:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:14.424 16:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:14.424 
16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:14.424 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:32:14.424 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:32:14.424 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:32:14.424 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:14.682 [2024-07-24 16:49:11.479363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:14.682 [2024-07-24 16:49:11.479432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:14.682 [2024-07-24 16:49:11.479456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:32:14.682 [2024-07-24 16:49:11.479474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:14.682 [2024-07-24 16:49:11.482028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:14.682 [2024-07-24 16:49:11.482065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:14.682 [2024-07-24 16:49:11.482127] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:14.682 [2024-07-24 16:49:11.482205] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:14.682 pt1 00:32:14.682 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:32:14.682 16:49:11 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:14.682 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:14.682 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:14.682 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:14.682 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:14.682 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:14.682 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:14.682 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:14.682 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:14.682 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:14.682 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:14.939 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:14.939 "name": "raid_bdev1", 00:32:14.939 "uuid": "31fd48c4-a888-4f20-93aa-db1447275a91", 00:32:14.939 "strip_size_kb": 0, 00:32:14.939 "state": "configuring", 00:32:14.939 "raid_level": "raid1", 00:32:14.939 "superblock": true, 00:32:14.939 "num_base_bdevs": 2, 00:32:14.939 "num_base_bdevs_discovered": 1, 00:32:14.939 "num_base_bdevs_operational": 2, 00:32:14.939 "base_bdevs_list": [ 00:32:14.939 { 00:32:14.939 "name": "pt1", 00:32:14.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:14.940 
"is_configured": true, 00:32:14.940 "data_offset": 256, 00:32:14.940 "data_size": 7936 00:32:14.940 }, 00:32:14.940 { 00:32:14.940 "name": null, 00:32:14.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:14.940 "is_configured": false, 00:32:14.940 "data_offset": 256, 00:32:14.940 "data_size": 7936 00:32:14.940 } 00:32:14.940 ] 00:32:14.940 }' 00:32:14.940 16:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:14.940 16:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:15.504 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:32:15.504 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:32:15.504 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:32:15.504 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:15.762 [2024-07-24 16:49:12.466025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:15.762 [2024-07-24 16:49:12.466091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:15.762 [2024-07-24 16:49:12.466117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:32:15.762 [2024-07-24 16:49:12.466149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:15.762 [2024-07-24 16:49:12.466446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:15.762 [2024-07-24 16:49:12.466470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:15.762 [2024-07-24 16:49:12.466527] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt2 00:32:15.762 [2024-07-24 16:49:12.466559] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:15.762 [2024-07-24 16:49:12.466717] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:32:15.762 [2024-07-24 16:49:12.466734] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:15.762 [2024-07-24 16:49:12.466817] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:32:15.762 [2024-07-24 16:49:12.467009] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:32:15.762 [2024-07-24 16:49:12.467023] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:32:15.762 [2024-07-24 16:49:12.467170] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:15.762 pt2 00:32:15.762 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:32:15.762 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:32:15.762 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:15.762 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:15.762 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:15.762 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:15.762 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:15.762 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:15.762 16:49:12 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:15.762 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:15.763 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:15.763 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:15.763 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:15.763 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.020 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:16.020 "name": "raid_bdev1", 00:32:16.020 "uuid": "31fd48c4-a888-4f20-93aa-db1447275a91", 00:32:16.020 "strip_size_kb": 0, 00:32:16.020 "state": "online", 00:32:16.020 "raid_level": "raid1", 00:32:16.020 "superblock": true, 00:32:16.020 "num_base_bdevs": 2, 00:32:16.020 "num_base_bdevs_discovered": 2, 00:32:16.020 "num_base_bdevs_operational": 2, 00:32:16.020 "base_bdevs_list": [ 00:32:16.020 { 00:32:16.020 "name": "pt1", 00:32:16.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:16.020 "is_configured": true, 00:32:16.020 "data_offset": 256, 00:32:16.020 "data_size": 7936 00:32:16.020 }, 00:32:16.020 { 00:32:16.020 "name": "pt2", 00:32:16.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:16.020 "is_configured": true, 00:32:16.020 "data_offset": 256, 00:32:16.020 "data_size": 7936 00:32:16.020 } 00:32:16.020 ] 00:32:16.020 }' 00:32:16.020 16:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:16.020 16:49:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:16.586 16:49:13 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:32:16.586 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:16.586 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:16.586 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:16.586 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:16.586 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:32:16.586 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:16.586 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:16.586 [2024-07-24 16:49:13.437076] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:16.844 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:16.844 "name": "raid_bdev1", 00:32:16.844 "aliases": [ 00:32:16.844 "31fd48c4-a888-4f20-93aa-db1447275a91" 00:32:16.844 ], 00:32:16.844 "product_name": "Raid Volume", 00:32:16.844 "block_size": 4096, 00:32:16.844 "num_blocks": 7936, 00:32:16.844 "uuid": "31fd48c4-a888-4f20-93aa-db1447275a91", 00:32:16.844 "md_size": 32, 00:32:16.844 "md_interleave": false, 00:32:16.844 "dif_type": 0, 00:32:16.844 "assigned_rate_limits": { 00:32:16.844 "rw_ios_per_sec": 0, 00:32:16.844 "rw_mbytes_per_sec": 0, 00:32:16.844 "r_mbytes_per_sec": 0, 00:32:16.844 "w_mbytes_per_sec": 0 00:32:16.844 }, 00:32:16.844 "claimed": false, 00:32:16.844 "zoned": false, 00:32:16.844 "supported_io_types": { 00:32:16.844 "read": true, 00:32:16.844 "write": true, 00:32:16.844 "unmap": false, 00:32:16.844 "flush": 
false, 00:32:16.844 "reset": true, 00:32:16.844 "nvme_admin": false, 00:32:16.844 "nvme_io": false, 00:32:16.844 "nvme_io_md": false, 00:32:16.844 "write_zeroes": true, 00:32:16.844 "zcopy": false, 00:32:16.844 "get_zone_info": false, 00:32:16.844 "zone_management": false, 00:32:16.844 "zone_append": false, 00:32:16.844 "compare": false, 00:32:16.844 "compare_and_write": false, 00:32:16.844 "abort": false, 00:32:16.844 "seek_hole": false, 00:32:16.844 "seek_data": false, 00:32:16.844 "copy": false, 00:32:16.844 "nvme_iov_md": false 00:32:16.844 }, 00:32:16.844 "memory_domains": [ 00:32:16.844 { 00:32:16.844 "dma_device_id": "system", 00:32:16.844 "dma_device_type": 1 00:32:16.844 }, 00:32:16.844 { 00:32:16.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:16.844 "dma_device_type": 2 00:32:16.844 }, 00:32:16.844 { 00:32:16.844 "dma_device_id": "system", 00:32:16.844 "dma_device_type": 1 00:32:16.844 }, 00:32:16.844 { 00:32:16.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:16.844 "dma_device_type": 2 00:32:16.844 } 00:32:16.844 ], 00:32:16.844 "driver_specific": { 00:32:16.844 "raid": { 00:32:16.844 "uuid": "31fd48c4-a888-4f20-93aa-db1447275a91", 00:32:16.844 "strip_size_kb": 0, 00:32:16.844 "state": "online", 00:32:16.844 "raid_level": "raid1", 00:32:16.844 "superblock": true, 00:32:16.844 "num_base_bdevs": 2, 00:32:16.844 "num_base_bdevs_discovered": 2, 00:32:16.844 "num_base_bdevs_operational": 2, 00:32:16.844 "base_bdevs_list": [ 00:32:16.844 { 00:32:16.844 "name": "pt1", 00:32:16.844 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:16.844 "is_configured": true, 00:32:16.844 "data_offset": 256, 00:32:16.844 "data_size": 7936 00:32:16.844 }, 00:32:16.844 { 00:32:16.844 "name": "pt2", 00:32:16.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:16.844 "is_configured": true, 00:32:16.844 "data_offset": 256, 00:32:16.844 "data_size": 7936 00:32:16.844 } 00:32:16.844 ] 00:32:16.844 } 00:32:16.844 } 00:32:16.844 }' 00:32:16.844 16:49:13 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:16.844 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:16.844 pt2' 00:32:16.844 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:16.844 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:16.844 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:17.102 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:17.102 "name": "pt1", 00:32:17.102 "aliases": [ 00:32:17.102 "00000000-0000-0000-0000-000000000001" 00:32:17.102 ], 00:32:17.102 "product_name": "passthru", 00:32:17.102 "block_size": 4096, 00:32:17.102 "num_blocks": 8192, 00:32:17.102 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:17.102 "md_size": 32, 00:32:17.102 "md_interleave": false, 00:32:17.102 "dif_type": 0, 00:32:17.102 "assigned_rate_limits": { 00:32:17.102 "rw_ios_per_sec": 0, 00:32:17.102 "rw_mbytes_per_sec": 0, 00:32:17.102 "r_mbytes_per_sec": 0, 00:32:17.102 "w_mbytes_per_sec": 0 00:32:17.102 }, 00:32:17.102 "claimed": true, 00:32:17.102 "claim_type": "exclusive_write", 00:32:17.102 "zoned": false, 00:32:17.102 "supported_io_types": { 00:32:17.102 "read": true, 00:32:17.102 "write": true, 00:32:17.102 "unmap": true, 00:32:17.102 "flush": true, 00:32:17.102 "reset": true, 00:32:17.102 "nvme_admin": false, 00:32:17.102 "nvme_io": false, 00:32:17.102 "nvme_io_md": false, 00:32:17.102 "write_zeroes": true, 00:32:17.102 "zcopy": true, 00:32:17.102 "get_zone_info": false, 00:32:17.102 "zone_management": false, 00:32:17.102 "zone_append": false, 00:32:17.102 "compare": 
false, 00:32:17.102 "compare_and_write": false, 00:32:17.102 "abort": true, 00:32:17.102 "seek_hole": false, 00:32:17.102 "seek_data": false, 00:32:17.102 "copy": true, 00:32:17.102 "nvme_iov_md": false 00:32:17.102 }, 00:32:17.102 "memory_domains": [ 00:32:17.102 { 00:32:17.102 "dma_device_id": "system", 00:32:17.102 "dma_device_type": 1 00:32:17.102 }, 00:32:17.102 { 00:32:17.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:17.102 "dma_device_type": 2 00:32:17.102 } 00:32:17.102 ], 00:32:17.102 "driver_specific": { 00:32:17.102 "passthru": { 00:32:17.102 "name": "pt1", 00:32:17.102 "base_bdev_name": "malloc1" 00:32:17.102 } 00:32:17.102 } 00:32:17.102 }' 00:32:17.102 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:17.102 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:17.102 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:17.102 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:17.102 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:17.102 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:17.102 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:17.102 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:17.360 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:17.360 16:49:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:17.360 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:17.360 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 
00:32:17.360 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:17.360 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:17.360 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:17.618 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:17.618 "name": "pt2", 00:32:17.618 "aliases": [ 00:32:17.618 "00000000-0000-0000-0000-000000000002" 00:32:17.618 ], 00:32:17.618 "product_name": "passthru", 00:32:17.618 "block_size": 4096, 00:32:17.618 "num_blocks": 8192, 00:32:17.618 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:17.618 "md_size": 32, 00:32:17.618 "md_interleave": false, 00:32:17.618 "dif_type": 0, 00:32:17.618 "assigned_rate_limits": { 00:32:17.618 "rw_ios_per_sec": 0, 00:32:17.618 "rw_mbytes_per_sec": 0, 00:32:17.618 "r_mbytes_per_sec": 0, 00:32:17.618 "w_mbytes_per_sec": 0 00:32:17.618 }, 00:32:17.618 "claimed": true, 00:32:17.618 "claim_type": "exclusive_write", 00:32:17.618 "zoned": false, 00:32:17.618 "supported_io_types": { 00:32:17.618 "read": true, 00:32:17.618 "write": true, 00:32:17.618 "unmap": true, 00:32:17.618 "flush": true, 00:32:17.618 "reset": true, 00:32:17.618 "nvme_admin": false, 00:32:17.618 "nvme_io": false, 00:32:17.618 "nvme_io_md": false, 00:32:17.618 "write_zeroes": true, 00:32:17.618 "zcopy": true, 00:32:17.618 "get_zone_info": false, 00:32:17.618 "zone_management": false, 00:32:17.618 "zone_append": false, 00:32:17.618 "compare": false, 00:32:17.618 "compare_and_write": false, 00:32:17.618 "abort": true, 00:32:17.618 "seek_hole": false, 00:32:17.618 "seek_data": false, 00:32:17.618 "copy": true, 00:32:17.618 "nvme_iov_md": false 00:32:17.618 }, 00:32:17.618 "memory_domains": [ 00:32:17.618 { 00:32:17.618 
"dma_device_id": "system", 00:32:17.619 "dma_device_type": 1 00:32:17.619 }, 00:32:17.619 { 00:32:17.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:17.619 "dma_device_type": 2 00:32:17.619 } 00:32:17.619 ], 00:32:17.619 "driver_specific": { 00:32:17.619 "passthru": { 00:32:17.619 "name": "pt2", 00:32:17.619 "base_bdev_name": "malloc2" 00:32:17.619 } 00:32:17.619 } 00:32:17.619 }' 00:32:17.619 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:17.619 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:17.619 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:17.619 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:17.619 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:17.619 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:17.619 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:17.878 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:17.878 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:17.878 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:17.878 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:17.878 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:17.878 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:17.878 16:49:14 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:32:18.136 [2024-07-24 16:49:14.856956] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:18.136 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # '[' 31fd48c4-a888-4f20-93aa-db1447275a91 '!=' 31fd48c4-a888-4f20-93aa-db1447275a91 ']' 00:32:18.136 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:32:18.136 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:18.136 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:32:18.136 16:49:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:18.394 [2024-07-24 16:49:15.085226] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:18.394 16:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:18.394 16:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:18.395 16:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:18.395 16:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:18.395 16:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:18.395 16:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:18.395 16:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:18.395 16:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:18.395 16:49:15 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:18.395 16:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:18.395 16:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.395 16:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:18.653 16:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:18.653 "name": "raid_bdev1", 00:32:18.653 "uuid": "31fd48c4-a888-4f20-93aa-db1447275a91", 00:32:18.653 "strip_size_kb": 0, 00:32:18.653 "state": "online", 00:32:18.653 "raid_level": "raid1", 00:32:18.653 "superblock": true, 00:32:18.653 "num_base_bdevs": 2, 00:32:18.653 "num_base_bdevs_discovered": 1, 00:32:18.653 "num_base_bdevs_operational": 1, 00:32:18.653 "base_bdevs_list": [ 00:32:18.653 { 00:32:18.653 "name": null, 00:32:18.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.653 "is_configured": false, 00:32:18.653 "data_offset": 256, 00:32:18.653 "data_size": 7936 00:32:18.653 }, 00:32:18.653 { 00:32:18.653 "name": "pt2", 00:32:18.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:18.653 "is_configured": true, 00:32:18.653 "data_offset": 256, 00:32:18.653 "data_size": 7936 00:32:18.653 } 00:32:18.653 ] 00:32:18.653 }' 00:32:18.653 16:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:18.653 16:49:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:19.220 16:49:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:19.479 [2024-07-24 
16:49:16.111967] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:19.479 [2024-07-24 16:49:16.111998] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:19.479 [2024-07-24 16:49:16.112085] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:19.479 [2024-07-24 16:49:16.112153] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:19.479 [2024-07-24 16:49:16.112175] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:32:19.479 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.479 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:32:19.738 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:32:19.738 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:32:19.738 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:32:19.738 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:32:19.738 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:19.738 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:32:19.738 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:32:19.738 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:32:19.738 16:49:16 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:32:19.738 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@534 -- # i=1 00:32:19.738 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:19.996 [2024-07-24 16:49:16.785789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:19.997 [2024-07-24 16:49:16.785870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:19.997 [2024-07-24 16:49:16.785894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:32:19.997 [2024-07-24 16:49:16.785912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:19.997 [2024-07-24 16:49:16.788490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:19.997 [2024-07-24 16:49:16.788527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:19.997 [2024-07-24 16:49:16.788589] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:19.997 [2024-07-24 16:49:16.788654] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:19.997 [2024-07-24 16:49:16.788811] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:32:19.997 [2024-07-24 16:49:16.788828] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:19.997 [2024-07-24 16:49:16.788914] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:32:19.997 [2024-07-24 16:49:16.789107] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:32:19.997 [2024-07-24 16:49:16.789121] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:32:19.997 [2024-07-24 16:49:16.789284] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:19.997 pt2 00:32:19.997 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:19.997 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:19.997 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:19.997 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:19.997 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:19.997 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:19.997 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:19.997 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:19.997 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:19.997 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:19.997 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.997 16:49:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.255 16:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:20.255 "name": "raid_bdev1", 00:32:20.255 "uuid": 
"31fd48c4-a888-4f20-93aa-db1447275a91", 00:32:20.255 "strip_size_kb": 0, 00:32:20.255 "state": "online", 00:32:20.255 "raid_level": "raid1", 00:32:20.255 "superblock": true, 00:32:20.255 "num_base_bdevs": 2, 00:32:20.255 "num_base_bdevs_discovered": 1, 00:32:20.255 "num_base_bdevs_operational": 1, 00:32:20.255 "base_bdevs_list": [ 00:32:20.255 { 00:32:20.255 "name": null, 00:32:20.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.256 "is_configured": false, 00:32:20.256 "data_offset": 256, 00:32:20.256 "data_size": 7936 00:32:20.256 }, 00:32:20.256 { 00:32:20.256 "name": "pt2", 00:32:20.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:20.256 "is_configured": true, 00:32:20.256 "data_offset": 256, 00:32:20.256 "data_size": 7936 00:32:20.256 } 00:32:20.256 ] 00:32:20.256 }' 00:32:20.256 16:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:20.256 16:49:17 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:20.823 16:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:21.081 [2024-07-24 16:49:17.820594] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:21.081 [2024-07-24 16:49:17.820627] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:21.081 [2024-07-24 16:49:17.820701] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:21.081 [2024-07-24 16:49:17.820761] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:21.081 [2024-07-24 16:49:17.820777] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:32:21.081 16:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:21.081 16:49:17 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:32:21.340 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:32:21.340 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:32:21.340 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:32:21.340 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:21.599 [2024-07-24 16:49:18.273947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:21.599 [2024-07-24 16:49:18.274010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:21.599 [2024-07-24 16:49:18.274037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:32:21.599 [2024-07-24 16:49:18.274052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:21.599 [2024-07-24 16:49:18.276627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:21.599 [2024-07-24 16:49:18.276660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:21.599 [2024-07-24 16:49:18.276727] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:21.599 [2024-07-24 16:49:18.276794] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:21.599 [2024-07-24 16:49:18.276985] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:21.599 [2024-07-24 
16:49:18.277002] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:21.599 [2024-07-24 16:49:18.277027] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042f80 name raid_bdev1, state configuring 00:32:21.599 [2024-07-24 16:49:18.277111] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:21.599 [2024-07-24 16:49:18.277207] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:32:21.599 [2024-07-24 16:49:18.277221] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:21.599 [2024-07-24 16:49:18.277301] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:32:21.599 [2024-07-24 16:49:18.277482] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:32:21.599 [2024-07-24 16:49:18.277498] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:32:21.599 [2024-07-24 16:49:18.277627] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:21.599 pt1 00:32:21.599 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:32:21.599 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:21.599 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:21.599 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:21.599 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:21.599 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:21.599 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=1 00:32:21.599 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:21.599 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:21.599 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:21.599 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:21.599 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:21.599 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:21.858 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:21.858 "name": "raid_bdev1", 00:32:21.858 "uuid": "31fd48c4-a888-4f20-93aa-db1447275a91", 00:32:21.858 "strip_size_kb": 0, 00:32:21.858 "state": "online", 00:32:21.858 "raid_level": "raid1", 00:32:21.858 "superblock": true, 00:32:21.858 "num_base_bdevs": 2, 00:32:21.858 "num_base_bdevs_discovered": 1, 00:32:21.858 "num_base_bdevs_operational": 1, 00:32:21.858 "base_bdevs_list": [ 00:32:21.858 { 00:32:21.858 "name": null, 00:32:21.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.858 "is_configured": false, 00:32:21.858 "data_offset": 256, 00:32:21.858 "data_size": 7936 00:32:21.858 }, 00:32:21.858 { 00:32:21.858 "name": "pt2", 00:32:21.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:21.858 "is_configured": true, 00:32:21.858 "data_offset": 256, 00:32:21.858 "data_size": 7936 00:32:21.858 } 00:32:21.858 ] 00:32:21.858 }' 00:32:21.858 16:49:18 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:21.858 16:49:18 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:32:22.426 16:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:32:22.426 16:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:22.685 16:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:32:22.685 16:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:22.685 16:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:32:22.685 [2024-07-24 16:49:19.529653] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:22.944 16:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # '[' 31fd48c4-a888-4f20-93aa-db1447275a91 '!=' 31fd48c4-a888-4f20-93aa-db1447275a91 ']' 00:32:22.944 16:49:19 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@578 -- # killprocess 1795455 00:32:22.944 16:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 1795455 ']' 00:32:22.944 16:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 1795455 00:32:22.944 16:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:32:22.944 16:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:22.944 16:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1795455 00:32:22.944 16:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:32:22.944 16:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:22.944 16:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1795455' 00:32:22.944 killing process with pid 1795455 00:32:22.944 16:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 1795455 00:32:22.944 [2024-07-24 16:49:19.609213] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:22.944 [2024-07-24 16:49:19.609310] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:22.944 [2024-07-24 16:49:19.609367] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:22.944 [2024-07-24 16:49:19.609385] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:32:22.944 16:49:19 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 1795455 00:32:23.202 [2024-07-24 16:49:19.911952] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:25.105 16:49:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@580 -- # return 0 00:32:25.105 00:32:25.105 real 0m17.644s 00:32:25.105 user 0m30.126s 00:32:25.105 sys 0m3.043s 00:32:25.105 16:49:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:25.105 16:49:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.105 ************************************ 00:32:25.105 END TEST raid_superblock_test_md_separate 00:32:25.105 ************************************ 00:32:25.105 16:49:21 bdev_raid -- bdev/bdev_raid.sh@987 -- # '[' true = true ']' 00:32:25.105 16:49:21 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 
00:32:25.105 16:49:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:32:25.105 16:49:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:25.105 16:49:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:25.105 ************************************ 00:32:25.105 START TEST raid_rebuild_test_sb_md_separate 00:32:25.105 ************************************ 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # local verify=true 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # local strip_size 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # local create_arg 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@594 -- # local data_offset 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # raid_pid=1798671 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # waitforlisten 1798671 /var/tmp/spdk-raid.sock 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 1798671 ']' 00:32:25.105 16:49:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:25.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:25.105 16:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.105 [2024-07-24 16:49:21.823103] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:32:25.105 [2024-07-24 16:49:21.823235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1798671 ] 00:32:25.105 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:25.105 Zero copy mechanism will not be used. 
00:32:25.105 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:25.105 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:25.105 
00:32:25.363 [2024-07-24 16:49:22.047701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.621 [2024-07-24 16:49:22.332143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.879 [2024-07-24 16:49:22.659274] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:25.879 [2024-07-24 16:49:22.659309] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:26.137 16:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:26.137 16:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:32:26.137 16:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:32:26.137 16:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:32:26.397 BaseBdev1_malloc 00:32:26.397 16:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:26.658 [2024-07-24 16:49:23.342426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:26.658 
[2024-07-24 16:49:23.342496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:26.658 [2024-07-24 16:49:23.342526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:32:26.658 [2024-07-24 16:49:23.342546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:26.658 [2024-07-24 16:49:23.345038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:26.658 [2024-07-24 16:49:23.345075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:26.659 BaseBdev1 00:32:26.659 16:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:32:26.659 16:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:32:26.922 BaseBdev2_malloc 00:32:26.922 16:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:27.180 [2024-07-24 16:49:23.848648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:27.180 [2024-07-24 16:49:23.848713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:27.180 [2024-07-24 16:49:23.848741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:32:27.180 [2024-07-24 16:49:23.848763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:27.180 [2024-07-24 16:49:23.851255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:27.180 [2024-07-24 16:49:23.851293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:32:27.180 BaseBdev2 00:32:27.180 16:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:32:27.437 spare_malloc 00:32:27.437 16:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:27.696 spare_delay 00:32:27.696 16:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:27.955 [2024-07-24 16:49:24.564500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:27.955 [2024-07-24 16:49:24.564560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:27.955 [2024-07-24 16:49:24.564594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:32:27.955 [2024-07-24 16:49:24.564613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:27.955 [2024-07-24 16:49:24.567099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:27.955 [2024-07-24 16:49:24.567137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:27.955 spare 00:32:27.955 16:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:32:27.955 [2024-07-24 16:49:24.789165] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:27.955 [2024-07-24 16:49:24.791506] 
bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:27.955 [2024-07-24 16:49:24.791752] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:32:27.955 [2024-07-24 16:49:24.791774] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:27.955 [2024-07-24 16:49:24.791886] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:32:27.955 [2024-07-24 16:49:24.792102] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:32:27.955 [2024-07-24 16:49:24.792116] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:32:27.955 [2024-07-24 16:49:24.792283] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:27.955 16:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:27.955 16:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:27.955 16:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:27.955 16:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:27.955 16:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:27.955 16:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:27.955 16:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:27.955 16:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:27.955 16:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:27.955 16:49:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:27.955 16:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.955 16:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:28.213 16:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:28.213 "name": "raid_bdev1", 00:32:28.213 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:28.213 "strip_size_kb": 0, 00:32:28.213 "state": "online", 00:32:28.213 "raid_level": "raid1", 00:32:28.213 "superblock": true, 00:32:28.213 "num_base_bdevs": 2, 00:32:28.213 "num_base_bdevs_discovered": 2, 00:32:28.213 "num_base_bdevs_operational": 2, 00:32:28.213 "base_bdevs_list": [ 00:32:28.213 { 00:32:28.213 "name": "BaseBdev1", 00:32:28.213 "uuid": "d12b6ced-f2e7-5582-b6d3-bcc6eb176b22", 00:32:28.213 "is_configured": true, 00:32:28.213 "data_offset": 256, 00:32:28.213 "data_size": 7936 00:32:28.213 }, 00:32:28.213 { 00:32:28.213 "name": "BaseBdev2", 00:32:28.213 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:28.213 "is_configured": true, 00:32:28.213 "data_offset": 256, 00:32:28.213 "data_size": 7936 00:32:28.213 } 00:32:28.213 ] 00:32:28.213 }' 00:32:28.213 16:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:28.213 16:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:28.839 16:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:28.839 16:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:32:29.098 [2024-07-24 
16:49:25.816249] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:29.098 16:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:32:29.098 16:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:29.098 16:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:29.357 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:32:29.357 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:32:29.357 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:32:29.357 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:32:29.357 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:29.357 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:29.357 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:29.357 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:29.357 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:29.357 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:29.357 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:32:29.357 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:29.357 16:49:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:29.357 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:29.615 [2024-07-24 16:49:26.281214] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:32:29.615 /dev/nbd0 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:29.615 1+0 records in 00:32:29.615 1+0 records out 00:32:29.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251744 s, 16.3 MB/s 00:32:29.615 16:49:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:32:29.615 16:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:32:30.551 7936+0 records in 00:32:30.551 7936+0 records out 00:32:30.551 32505856 bytes (33 MB, 31 MiB) copied, 0.794121 s, 40.9 MB/s 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@51 -- # local i 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:30.551 [2024-07-24 16:49:27.388257] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:30.551 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:30.808 [2024-07-24 16:49:27.604969] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:30.808 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:30.808 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:30.808 16:49:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:30.808 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:30.808 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:30.808 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:30.808 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:30.808 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:30.808 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:30.808 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:30.808 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:30.808 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:31.066 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:31.066 "name": "raid_bdev1", 00:32:31.066 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:31.066 "strip_size_kb": 0, 00:32:31.066 "state": "online", 00:32:31.066 "raid_level": "raid1", 00:32:31.066 "superblock": true, 00:32:31.066 "num_base_bdevs": 2, 00:32:31.066 "num_base_bdevs_discovered": 1, 00:32:31.066 "num_base_bdevs_operational": 1, 00:32:31.066 "base_bdevs_list": [ 00:32:31.066 { 00:32:31.066 "name": null, 00:32:31.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:31.066 "is_configured": false, 00:32:31.066 "data_offset": 256, 00:32:31.066 "data_size": 7936 00:32:31.066 }, 
00:32:31.066 { 00:32:31.066 "name": "BaseBdev2", 00:32:31.066 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:31.066 "is_configured": true, 00:32:31.066 "data_offset": 256, 00:32:31.066 "data_size": 7936 00:32:31.066 } 00:32:31.066 ] 00:32:31.066 }' 00:32:31.066 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:31.066 16:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:31.630 16:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:31.888 [2024-07-24 16:49:28.639747] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:31.888 [2024-07-24 16:49:28.665257] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a4410 00:32:31.888 [2024-07-24 16:49:28.667571] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:31.888 16:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:33.258 16:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:33.258 16:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:33.258 16:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:33.258 16:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:33.258 16:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:33.258 16:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:32:33.259 16:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:33.259 16:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:33.259 "name": "raid_bdev1", 00:32:33.259 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:33.259 "strip_size_kb": 0, 00:32:33.259 "state": "online", 00:32:33.259 "raid_level": "raid1", 00:32:33.259 "superblock": true, 00:32:33.259 "num_base_bdevs": 2, 00:32:33.259 "num_base_bdevs_discovered": 2, 00:32:33.259 "num_base_bdevs_operational": 2, 00:32:33.259 "process": { 00:32:33.259 "type": "rebuild", 00:32:33.259 "target": "spare", 00:32:33.259 "progress": { 00:32:33.259 "blocks": 3072, 00:32:33.259 "percent": 38 00:32:33.259 } 00:32:33.259 }, 00:32:33.259 "base_bdevs_list": [ 00:32:33.259 { 00:32:33.259 "name": "spare", 00:32:33.259 "uuid": "44af9c3f-9438-5649-8e07-aba15802400d", 00:32:33.259 "is_configured": true, 00:32:33.259 "data_offset": 256, 00:32:33.259 "data_size": 7936 00:32:33.259 }, 00:32:33.259 { 00:32:33.259 "name": "BaseBdev2", 00:32:33.259 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:33.259 "is_configured": true, 00:32:33.259 "data_offset": 256, 00:32:33.259 "data_size": 7936 00:32:33.259 } 00:32:33.259 ] 00:32:33.259 }' 00:32:33.259 16:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:33.259 16:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:33.259 16:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:33.259 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:33.259 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@668 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:33.516 [2024-07-24 16:49:30.224698] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:33.516 [2024-07-24 16:49:30.280680] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:33.516 [2024-07-24 16:49:30.280756] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:33.516 [2024-07-24 16:49:30.280778] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:33.516 [2024-07-24 16:49:30.280797] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:33.516 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:33.516 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:33.516 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:33.516 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:33.516 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:33.516 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:33.516 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:33.516 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:33.516 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:33.516 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:33.516 16:49:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:33.516 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:33.774 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:33.774 "name": "raid_bdev1", 00:32:33.774 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:33.774 "strip_size_kb": 0, 00:32:33.774 "state": "online", 00:32:33.774 "raid_level": "raid1", 00:32:33.774 "superblock": true, 00:32:33.774 "num_base_bdevs": 2, 00:32:33.774 "num_base_bdevs_discovered": 1, 00:32:33.774 "num_base_bdevs_operational": 1, 00:32:33.774 "base_bdevs_list": [ 00:32:33.774 { 00:32:33.774 "name": null, 00:32:33.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.774 "is_configured": false, 00:32:33.774 "data_offset": 256, 00:32:33.774 "data_size": 7936 00:32:33.774 }, 00:32:33.774 { 00:32:33.774 "name": "BaseBdev2", 00:32:33.774 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:33.774 "is_configured": true, 00:32:33.774 "data_offset": 256, 00:32:33.774 "data_size": 7936 00:32:33.774 } 00:32:33.774 ] 00:32:33.774 }' 00:32:33.774 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:33.774 16:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.339 16:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:34.339 16:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:34.339 16:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:34.339 16:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 
-- # local target=none 00:32:34.339 16:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:34.339 16:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.339 16:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.597 16:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:34.597 "name": "raid_bdev1", 00:32:34.597 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:34.597 "strip_size_kb": 0, 00:32:34.597 "state": "online", 00:32:34.597 "raid_level": "raid1", 00:32:34.597 "superblock": true, 00:32:34.597 "num_base_bdevs": 2, 00:32:34.597 "num_base_bdevs_discovered": 1, 00:32:34.597 "num_base_bdevs_operational": 1, 00:32:34.597 "base_bdevs_list": [ 00:32:34.597 { 00:32:34.597 "name": null, 00:32:34.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.597 "is_configured": false, 00:32:34.597 "data_offset": 256, 00:32:34.597 "data_size": 7936 00:32:34.597 }, 00:32:34.597 { 00:32:34.597 "name": "BaseBdev2", 00:32:34.597 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:34.597 "is_configured": true, 00:32:34.597 "data_offset": 256, 00:32:34.597 "data_size": 7936 00:32:34.597 } 00:32:34.597 ] 00:32:34.597 }' 00:32:34.597 16:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:34.597 16:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:34.597 16:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:34.597 16:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:34.597 16:49:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:34.854 [2024-07-24 16:49:31.639965] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:34.854 [2024-07-24 16:49:31.664008] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a44e0 00:32:34.854 [2024-07-24 16:49:31.666364] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:34.854 16:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@678 -- # sleep 1 00:32:36.226 16:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:36.226 16:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:36.226 16:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:36.226 16:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:36.226 16:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:36.226 16:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.226 16:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.226 16:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:36.226 "name": "raid_bdev1", 00:32:36.226 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:36.226 "strip_size_kb": 0, 00:32:36.226 "state": "online", 00:32:36.226 "raid_level": "raid1", 00:32:36.226 
"superblock": true, 00:32:36.226 "num_base_bdevs": 2, 00:32:36.226 "num_base_bdevs_discovered": 2, 00:32:36.226 "num_base_bdevs_operational": 2, 00:32:36.226 "process": { 00:32:36.226 "type": "rebuild", 00:32:36.226 "target": "spare", 00:32:36.226 "progress": { 00:32:36.226 "blocks": 3072, 00:32:36.226 "percent": 38 00:32:36.226 } 00:32:36.226 }, 00:32:36.226 "base_bdevs_list": [ 00:32:36.226 { 00:32:36.226 "name": "spare", 00:32:36.226 "uuid": "44af9c3f-9438-5649-8e07-aba15802400d", 00:32:36.226 "is_configured": true, 00:32:36.226 "data_offset": 256, 00:32:36.226 "data_size": 7936 00:32:36.226 }, 00:32:36.226 { 00:32:36.226 "name": "BaseBdev2", 00:32:36.226 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:36.226 "is_configured": true, 00:32:36.226 "data_offset": 256, 00:32:36.226 "data_size": 7936 00:32:36.226 } 00:32:36.226 ] 00:32:36.226 }' 00:32:36.226 16:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:36.226 16:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:36.226 16:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:36.226 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:36.226 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:32:36.226 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:32:36.226 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:32:36.226 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:32:36.226 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:32:36.226 16:49:33 
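The `bdev_raid.sh: line 681: [: =: unary operator expected` failure recorded above is the classic single-bracket quoting pitfall: an empty variable expanded unquoted inside `[ ... ]` drops out entirely, leaving `[ = false ]`, which `test` cannot parse. A minimal sketch reproducing it (the variable name `flag` is a hypothetical stand-in for whatever the script expands at line 681; the actual SPDK fix is not shown here):

```shell
#!/usr/bin/env bash
# Reproduce "[: =: unary operator expected": with flag empty, the
# unquoted expansion vanishes and `[` sees only `= false`.
flag=""

[ $flag = false ] 2>/dev/null   # malformed test -> error, nonzero status
echo "unquoted test exited with status $?"

# Quoting the expansion keeps an (empty) operand in place, so the
# comparison is well-formed and simply evaluates to false:
if [ "$flag" = false ]; then
  echo "flag is false"
else
  echo "flag is empty or not false"
fi
```

Using `[[ ... ]]` instead of `[ ... ]` also avoids the problem, since `[[` does not word-split its operands; which remedy the script ultimately uses is not visible in this log.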
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:32:36.226 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # local timeout=1177 00:32:36.226 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:36.226 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:36.226 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:36.226 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:36.226 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:36.226 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:36.226 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.226 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.483 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:36.483 "name": "raid_bdev1", 00:32:36.483 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:36.483 "strip_size_kb": 0, 00:32:36.483 "state": "online", 00:32:36.483 "raid_level": "raid1", 00:32:36.483 "superblock": true, 00:32:36.483 "num_base_bdevs": 2, 00:32:36.483 "num_base_bdevs_discovered": 2, 00:32:36.483 "num_base_bdevs_operational": 2, 00:32:36.483 "process": { 00:32:36.483 "type": "rebuild", 00:32:36.483 "target": "spare", 00:32:36.483 "progress": { 00:32:36.483 "blocks": 3840, 00:32:36.483 "percent": 48 00:32:36.483 } 00:32:36.483 }, 00:32:36.483 "base_bdevs_list": [ 
00:32:36.483 { 00:32:36.483 "name": "spare", 00:32:36.483 "uuid": "44af9c3f-9438-5649-8e07-aba15802400d", 00:32:36.483 "is_configured": true, 00:32:36.483 "data_offset": 256, 00:32:36.483 "data_size": 7936 00:32:36.483 }, 00:32:36.483 { 00:32:36.483 "name": "BaseBdev2", 00:32:36.483 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:36.483 "is_configured": true, 00:32:36.483 "data_offset": 256, 00:32:36.483 "data_size": 7936 00:32:36.483 } 00:32:36.483 ] 00:32:36.483 }' 00:32:36.483 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:36.483 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:36.483 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:36.483 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:36.483 16:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:37.853 16:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:37.853 16:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:37.853 16:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:37.853 16:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:37.854 16:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:37.854 16:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:37.854 16:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.854 16:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:37.854 16:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:37.854 "name": "raid_bdev1", 00:32:37.854 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:37.854 "strip_size_kb": 0, 00:32:37.854 "state": "online", 00:32:37.854 "raid_level": "raid1", 00:32:37.854 "superblock": true, 00:32:37.854 "num_base_bdevs": 2, 00:32:37.854 "num_base_bdevs_discovered": 2, 00:32:37.854 "num_base_bdevs_operational": 2, 00:32:37.854 "process": { 00:32:37.854 "type": "rebuild", 00:32:37.854 "target": "spare", 00:32:37.854 "progress": { 00:32:37.854 "blocks": 7168, 00:32:37.854 "percent": 90 00:32:37.854 } 00:32:37.854 }, 00:32:37.854 "base_bdevs_list": [ 00:32:37.854 { 00:32:37.854 "name": "spare", 00:32:37.854 "uuid": "44af9c3f-9438-5649-8e07-aba15802400d", 00:32:37.854 "is_configured": true, 00:32:37.854 "data_offset": 256, 00:32:37.854 "data_size": 7936 00:32:37.854 }, 00:32:37.854 { 00:32:37.854 "name": "BaseBdev2", 00:32:37.854 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:37.854 "is_configured": true, 00:32:37.854 "data_offset": 256, 00:32:37.854 "data_size": 7936 00:32:37.854 } 00:32:37.854 ] 00:32:37.854 }' 00:32:37.854 16:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:37.854 16:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:37.854 16:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:37.854 16:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:37.854 16:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:38.111 
[2024-07-24 16:49:34.791646] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:38.111 [2024-07-24 16:49:34.791721] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:38.111 [2024-07-24 16:49:34.791820] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:39.042 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:39.042 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:39.042 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:39.042 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:39.042 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:39.042 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:39.042 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:39.042 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:39.042 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:39.042 "name": "raid_bdev1", 00:32:39.042 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:39.042 "strip_size_kb": 0, 00:32:39.042 "state": "online", 00:32:39.042 "raid_level": "raid1", 00:32:39.042 "superblock": true, 00:32:39.042 "num_base_bdevs": 2, 00:32:39.042 "num_base_bdevs_discovered": 2, 00:32:39.042 "num_base_bdevs_operational": 2, 00:32:39.042 "base_bdevs_list": [ 00:32:39.042 { 00:32:39.042 "name": 
"spare", 00:32:39.042 "uuid": "44af9c3f-9438-5649-8e07-aba15802400d", 00:32:39.042 "is_configured": true, 00:32:39.042 "data_offset": 256, 00:32:39.042 "data_size": 7936 00:32:39.042 }, 00:32:39.042 { 00:32:39.042 "name": "BaseBdev2", 00:32:39.042 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:39.042 "is_configured": true, 00:32:39.042 "data_offset": 256, 00:32:39.042 "data_size": 7936 00:32:39.042 } 00:32:39.042 ] 00:32:39.042 }' 00:32:39.042 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:39.299 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:39.299 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:39.299 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:39.299 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@724 -- # break 00:32:39.299 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:39.299 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:39.299 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:39.299 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:39.299 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:39.299 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:39.299 16:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:32:39.299 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:39.299 "name": "raid_bdev1", 00:32:39.299 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:39.299 "strip_size_kb": 0, 00:32:39.299 "state": "online", 00:32:39.299 "raid_level": "raid1", 00:32:39.299 "superblock": true, 00:32:39.299 "num_base_bdevs": 2, 00:32:39.299 "num_base_bdevs_discovered": 2, 00:32:39.299 "num_base_bdevs_operational": 2, 00:32:39.299 "base_bdevs_list": [ 00:32:39.299 { 00:32:39.299 "name": "spare", 00:32:39.299 "uuid": "44af9c3f-9438-5649-8e07-aba15802400d", 00:32:39.299 "is_configured": true, 00:32:39.299 "data_offset": 256, 00:32:39.299 "data_size": 7936 00:32:39.299 }, 00:32:39.299 { 00:32:39.299 "name": "BaseBdev2", 00:32:39.299 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:39.299 "is_configured": true, 00:32:39.299 "data_offset": 256, 00:32:39.299 "data_size": 7936 00:32:39.299 } 00:32:39.300 ] 00:32:39.300 }' 00:32:39.300 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:39.556 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:39.556 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:39.557 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:39.557 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:39.557 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:39.557 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:39.557 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:32:39.557 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:39.557 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:39.557 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:39.557 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:39.557 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:39.557 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:39.557 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:39.557 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:39.814 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:39.814 "name": "raid_bdev1", 00:32:39.814 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:39.814 "strip_size_kb": 0, 00:32:39.814 "state": "online", 00:32:39.814 "raid_level": "raid1", 00:32:39.814 "superblock": true, 00:32:39.814 "num_base_bdevs": 2, 00:32:39.814 "num_base_bdevs_discovered": 2, 00:32:39.814 "num_base_bdevs_operational": 2, 00:32:39.814 "base_bdevs_list": [ 00:32:39.814 { 00:32:39.814 "name": "spare", 00:32:39.814 "uuid": "44af9c3f-9438-5649-8e07-aba15802400d", 00:32:39.814 "is_configured": true, 00:32:39.814 "data_offset": 256, 00:32:39.814 "data_size": 7936 00:32:39.814 }, 00:32:39.814 { 00:32:39.814 "name": "BaseBdev2", 00:32:39.814 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:39.814 "is_configured": true, 00:32:39.814 "data_offset": 256, 00:32:39.814 
"data_size": 7936 00:32:39.814 } 00:32:39.814 ] 00:32:39.814 }' 00:32:39.814 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:39.814 16:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:40.378 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:40.378 [2024-07-24 16:49:37.226861] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:40.378 [2024-07-24 16:49:37.226904] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:40.378 [2024-07-24 16:49:37.226992] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:40.378 [2024-07-24 16:49:37.227068] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:40.378 [2024-07-24 16:49:37.227085] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # jq length 00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:40.637 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:40.895 /dev/nbd0 00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 
00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:40.895 1+0 records in 00:32:40.895 1+0 records out 00:32:40.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261106 s, 15.7 MB/s 00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:40.895 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:40.896 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:32:40.896 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:40.896 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:40.896 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:41.154 /dev/nbd1 00:32:41.154 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:41.154 16:49:37 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:41.154 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:41.154 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:32:41.154 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:41.154 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:41.154 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:41.154 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:32:41.154 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:41.154 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:41.154 16:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:41.154 1+0 records in 00:32:41.154 1+0 records out 00:32:41.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329384 s, 12.4 MB/s 00:32:41.154 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:41.154 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:32:41.154 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:41.154 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:41.154 16:49:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:32:41.412 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:41.412 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:41.412 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:41.412 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:41.412 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:41.412 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:41.412 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:41.412 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:32:41.412 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:41.412 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:41.670 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:41.670 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:41.670 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:41.670 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:41.670 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:41.670 16:49:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:41.670 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:41.670 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:41.670 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:41.670 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:41.928 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:41.928 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:41.928 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:41.928 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:41.928 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:41.928 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:41.929 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:41.929 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:41.929 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:32:41.929 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:42.187 16:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:42.445 [2024-07-24 16:49:39.196215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:42.445 [2024-07-24 16:49:39.196275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:42.445 [2024-07-24 16:49:39.196305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:32:42.445 [2024-07-24 16:49:39.196320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:42.445 [2024-07-24 16:49:39.198848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:42.445 [2024-07-24 16:49:39.198882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:42.445 [2024-07-24 16:49:39.198957] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:42.445 [2024-07-24 16:49:39.199027] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:42.445 [2024-07-24 16:49:39.199225] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:42.445 spare 00:32:42.445 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:42.445 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:42.445 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:42.445 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:42.445 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:42.446 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:32:42.446 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:42.446 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:42.446 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:42.446 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:42.446 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:42.446 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.446 [2024-07-24 16:49:39.299591] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043880 00:32:42.446 [2024-07-24 16:49:39.299624] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:42.446 [2024-07-24 16:49:39.299727] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9390 00:32:42.446 [2024-07-24 16:49:39.299954] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043880 00:32:42.446 [2024-07-24 16:49:39.299969] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043880 00:32:42.446 [2024-07-24 16:49:39.300122] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:42.704 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:42.704 "name": "raid_bdev1", 00:32:42.704 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:42.704 "strip_size_kb": 0, 00:32:42.704 "state": "online", 00:32:42.704 "raid_level": "raid1", 00:32:42.704 "superblock": true, 00:32:42.704 "num_base_bdevs": 2, 
00:32:42.704 "num_base_bdevs_discovered": 2, 00:32:42.704 "num_base_bdevs_operational": 2, 00:32:42.704 "base_bdevs_list": [ 00:32:42.704 { 00:32:42.704 "name": "spare", 00:32:42.704 "uuid": "44af9c3f-9438-5649-8e07-aba15802400d", 00:32:42.704 "is_configured": true, 00:32:42.704 "data_offset": 256, 00:32:42.704 "data_size": 7936 00:32:42.704 }, 00:32:42.704 { 00:32:42.704 "name": "BaseBdev2", 00:32:42.704 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:42.704 "is_configured": true, 00:32:42.704 "data_offset": 256, 00:32:42.704 "data_size": 7936 00:32:42.704 } 00:32:42.704 ] 00:32:42.704 }' 00:32:42.704 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:42.704 16:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:43.269 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:43.270 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:43.270 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:43.270 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:43.270 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:43.270 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.270 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:43.528 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:43.528 "name": "raid_bdev1", 00:32:43.528 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 
00:32:43.528 "strip_size_kb": 0, 00:32:43.528 "state": "online", 00:32:43.528 "raid_level": "raid1", 00:32:43.528 "superblock": true, 00:32:43.528 "num_base_bdevs": 2, 00:32:43.528 "num_base_bdevs_discovered": 2, 00:32:43.528 "num_base_bdevs_operational": 2, 00:32:43.528 "base_bdevs_list": [ 00:32:43.528 { 00:32:43.528 "name": "spare", 00:32:43.528 "uuid": "44af9c3f-9438-5649-8e07-aba15802400d", 00:32:43.528 "is_configured": true, 00:32:43.528 "data_offset": 256, 00:32:43.528 "data_size": 7936 00:32:43.528 }, 00:32:43.528 { 00:32:43.528 "name": "BaseBdev2", 00:32:43.528 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:43.528 "is_configured": true, 00:32:43.528 "data_offset": 256, 00:32:43.528 "data_size": 7936 00:32:43.528 } 00:32:43.528 ] 00:32:43.528 }' 00:32:43.528 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:43.528 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:43.528 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:43.528 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:43.528 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.528 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:43.786 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:32:43.786 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:44.080 [2024-07-24 16:49:40.780676] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:44.080 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:44.080 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:44.080 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:44.080 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:44.080 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:44.080 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:44.080 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:44.080 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:44.080 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:44.080 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:44.080 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:44.080 16:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:44.339 16:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:44.339 "name": "raid_bdev1", 00:32:44.339 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:44.339 "strip_size_kb": 0, 00:32:44.339 "state": "online", 00:32:44.339 "raid_level": "raid1", 00:32:44.339 "superblock": true, 00:32:44.339 
"num_base_bdevs": 2, 00:32:44.339 "num_base_bdevs_discovered": 1, 00:32:44.339 "num_base_bdevs_operational": 1, 00:32:44.339 "base_bdevs_list": [ 00:32:44.339 { 00:32:44.339 "name": null, 00:32:44.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.339 "is_configured": false, 00:32:44.339 "data_offset": 256, 00:32:44.339 "data_size": 7936 00:32:44.339 }, 00:32:44.339 { 00:32:44.339 "name": "BaseBdev2", 00:32:44.339 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:44.339 "is_configured": true, 00:32:44.339 "data_offset": 256, 00:32:44.339 "data_size": 7936 00:32:44.339 } 00:32:44.339 ] 00:32:44.339 }' 00:32:44.339 16:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:44.339 16:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:44.906 16:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:45.164 [2024-07-24 16:49:41.911741] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:45.164 [2024-07-24 16:49:41.911942] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:45.164 [2024-07-24 16:49:41.911969] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:32:45.164 [2024-07-24 16:49:41.912009] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:45.164 [2024-07-24 16:49:41.935179] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9460 00:32:45.164 [2024-07-24 16:49:41.937496] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:45.164 16:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # sleep 1 00:32:46.098 16:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:46.098 16:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:46.098 16:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:46.098 16:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:46.098 16:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:46.356 16:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:46.356 16:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:46.356 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:46.356 "name": "raid_bdev1", 00:32:46.356 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:46.356 "strip_size_kb": 0, 00:32:46.356 "state": "online", 00:32:46.356 "raid_level": "raid1", 00:32:46.356 "superblock": true, 00:32:46.356 "num_base_bdevs": 2, 00:32:46.356 "num_base_bdevs_discovered": 2, 00:32:46.356 "num_base_bdevs_operational": 2, 00:32:46.356 "process": { 00:32:46.356 "type": "rebuild", 00:32:46.356 
"target": "spare", 00:32:46.356 "progress": { 00:32:46.356 "blocks": 2816, 00:32:46.356 "percent": 35 00:32:46.356 } 00:32:46.356 }, 00:32:46.356 "base_bdevs_list": [ 00:32:46.356 { 00:32:46.356 "name": "spare", 00:32:46.356 "uuid": "44af9c3f-9438-5649-8e07-aba15802400d", 00:32:46.356 "is_configured": true, 00:32:46.356 "data_offset": 256, 00:32:46.356 "data_size": 7936 00:32:46.356 }, 00:32:46.356 { 00:32:46.356 "name": "BaseBdev2", 00:32:46.356 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:46.356 "is_configured": true, 00:32:46.356 "data_offset": 256, 00:32:46.356 "data_size": 7936 00:32:46.356 } 00:32:46.356 ] 00:32:46.356 }' 00:32:46.356 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:46.356 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:46.356 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:46.618 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:46.618 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:46.618 [2024-07-24 16:49:43.435390] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:46.618 [2024-07-24 16:49:43.449807] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:46.618 [2024-07-24 16:49:43.449871] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:46.618 [2024-07-24 16:49:43.449892] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:46.618 [2024-07-24 16:49:43.449907] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 
00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:46.878 "name": "raid_bdev1", 00:32:46.878 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:46.878 "strip_size_kb": 0, 00:32:46.878 "state": "online", 00:32:46.878 "raid_level": "raid1", 00:32:46.878 "superblock": true, 00:32:46.878 "num_base_bdevs": 2, 00:32:46.878 "num_base_bdevs_discovered": 1, 
00:32:46.878 "num_base_bdevs_operational": 1, 00:32:46.878 "base_bdevs_list": [ 00:32:46.878 { 00:32:46.878 "name": null, 00:32:46.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:46.878 "is_configured": false, 00:32:46.878 "data_offset": 256, 00:32:46.878 "data_size": 7936 00:32:46.878 }, 00:32:46.878 { 00:32:46.878 "name": "BaseBdev2", 00:32:46.878 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:46.878 "is_configured": true, 00:32:46.878 "data_offset": 256, 00:32:46.878 "data_size": 7936 00:32:46.878 } 00:32:46.878 ] 00:32:46.878 }' 00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:46.878 16:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:47.443 16:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:47.701 [2024-07-24 16:49:44.452829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:47.701 [2024-07-24 16:49:44.452904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:47.701 [2024-07-24 16:49:44.452931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:32:47.701 [2024-07-24 16:49:44.452948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:47.701 [2024-07-24 16:49:44.453272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:47.701 [2024-07-24 16:49:44.453298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:47.701 [2024-07-24 16:49:44.453371] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:47.701 [2024-07-24 16:49:44.453392] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) 
smaller than existing raid bdev raid_bdev1 (5) 00:32:47.701 [2024-07-24 16:49:44.453411] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:47.701 [2024-07-24 16:49:44.453445] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:47.701 [2024-07-24 16:49:44.476269] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9530 00:32:47.701 spare 00:32:47.701 [2024-07-24 16:49:44.478611] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:47.701 16:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # sleep 1 00:32:49.075 16:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:49.075 16:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:49.075 16:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:49.075 16:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:49.075 16:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:49.075 16:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.075 16:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.075 16:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:49.075 "name": "raid_bdev1", 00:32:49.075 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:49.075 "strip_size_kb": 0, 00:32:49.075 "state": "online", 00:32:49.075 "raid_level": "raid1", 00:32:49.075 
"superblock": true, 00:32:49.075 "num_base_bdevs": 2, 00:32:49.075 "num_base_bdevs_discovered": 2, 00:32:49.075 "num_base_bdevs_operational": 2, 00:32:49.075 "process": { 00:32:49.075 "type": "rebuild", 00:32:49.075 "target": "spare", 00:32:49.075 "progress": { 00:32:49.075 "blocks": 3072, 00:32:49.075 "percent": 38 00:32:49.075 } 00:32:49.075 }, 00:32:49.075 "base_bdevs_list": [ 00:32:49.075 { 00:32:49.075 "name": "spare", 00:32:49.075 "uuid": "44af9c3f-9438-5649-8e07-aba15802400d", 00:32:49.075 "is_configured": true, 00:32:49.075 "data_offset": 256, 00:32:49.075 "data_size": 7936 00:32:49.075 }, 00:32:49.075 { 00:32:49.075 "name": "BaseBdev2", 00:32:49.075 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:49.075 "is_configured": true, 00:32:49.075 "data_offset": 256, 00:32:49.075 "data_size": 7936 00:32:49.075 } 00:32:49.075 ] 00:32:49.075 }' 00:32:49.075 16:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:49.075 16:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:49.075 16:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:49.075 16:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:49.075 16:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:49.333 [2024-07-24 16:49:46.031693] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:49.333 [2024-07-24 16:49:46.091774] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:49.333 [2024-07-24 16:49:46.091831] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:49.333 [2024-07-24 16:49:46.091856] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:49.333 [2024-07-24 16:49:46.091868] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:49.333 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:49.333 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:49.333 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:49.333 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:49.333 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:49.333 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:49.333 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:49.333 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:49.333 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:49.333 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:49.333 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.333 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.592 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:49.592 "name": "raid_bdev1", 00:32:49.592 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 
00:32:49.592 "strip_size_kb": 0, 00:32:49.592 "state": "online", 00:32:49.592 "raid_level": "raid1", 00:32:49.592 "superblock": true, 00:32:49.592 "num_base_bdevs": 2, 00:32:49.592 "num_base_bdevs_discovered": 1, 00:32:49.592 "num_base_bdevs_operational": 1, 00:32:49.592 "base_bdevs_list": [ 00:32:49.592 { 00:32:49.592 "name": null, 00:32:49.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:49.592 "is_configured": false, 00:32:49.592 "data_offset": 256, 00:32:49.592 "data_size": 7936 00:32:49.592 }, 00:32:49.592 { 00:32:49.592 "name": "BaseBdev2", 00:32:49.592 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:49.592 "is_configured": true, 00:32:49.592 "data_offset": 256, 00:32:49.592 "data_size": 7936 00:32:49.592 } 00:32:49.592 ] 00:32:49.592 }' 00:32:49.592 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:49.592 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:50.159 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:50.159 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:50.159 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:50.159 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:50.159 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:50.159 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.159 16:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:50.419 16:49:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:50.419 "name": "raid_bdev1", 00:32:50.419 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:50.419 "strip_size_kb": 0, 00:32:50.419 "state": "online", 00:32:50.419 "raid_level": "raid1", 00:32:50.419 "superblock": true, 00:32:50.419 "num_base_bdevs": 2, 00:32:50.419 "num_base_bdevs_discovered": 1, 00:32:50.419 "num_base_bdevs_operational": 1, 00:32:50.419 "base_bdevs_list": [ 00:32:50.419 { 00:32:50.419 "name": null, 00:32:50.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:50.419 "is_configured": false, 00:32:50.419 "data_offset": 256, 00:32:50.419 "data_size": 7936 00:32:50.419 }, 00:32:50.419 { 00:32:50.419 "name": "BaseBdev2", 00:32:50.419 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:50.419 "is_configured": true, 00:32:50.419 "data_offset": 256, 00:32:50.419 "data_size": 7936 00:32:50.419 } 00:32:50.419 ] 00:32:50.419 }' 00:32:50.419 16:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:50.419 16:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:50.419 16:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:50.419 16:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:50.419 16:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:50.678 16:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:50.937 [2024-07-24 16:49:47.632835] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:32:50.937 [2024-07-24 16:49:47.632896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:50.937 [2024-07-24 16:49:47.632929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044480 00:32:50.937 [2024-07-24 16:49:47.632945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:50.937 [2024-07-24 16:49:47.633274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:50.937 [2024-07-24 16:49:47.633296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:50.937 [2024-07-24 16:49:47.633361] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:50.937 [2024-07-24 16:49:47.633382] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:50.937 [2024-07-24 16:49:47.633398] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:50.937 BaseBdev1 00:32:50.937 16:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@789 -- # sleep 1 00:32:51.874 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:51.874 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:51.874 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:51.874 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:51.874 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:51.874 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 
00:32:51.874 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:51.874 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:51.874 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:51.874 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:51.874 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.874 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.134 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:52.134 "name": "raid_bdev1", 00:32:52.134 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:52.134 "strip_size_kb": 0, 00:32:52.134 "state": "online", 00:32:52.134 "raid_level": "raid1", 00:32:52.134 "superblock": true, 00:32:52.134 "num_base_bdevs": 2, 00:32:52.134 "num_base_bdevs_discovered": 1, 00:32:52.134 "num_base_bdevs_operational": 1, 00:32:52.134 "base_bdevs_list": [ 00:32:52.134 { 00:32:52.134 "name": null, 00:32:52.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.134 "is_configured": false, 00:32:52.134 "data_offset": 256, 00:32:52.134 "data_size": 7936 00:32:52.134 }, 00:32:52.134 { 00:32:52.134 "name": "BaseBdev2", 00:32:52.134 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:52.134 "is_configured": true, 00:32:52.134 "data_offset": 256, 00:32:52.134 "data_size": 7936 00:32:52.134 } 00:32:52.134 ] 00:32:52.134 }' 00:32:52.134 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:52.134 16:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- 
# set +x 00:32:52.702 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:52.702 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:52.702 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:52.702 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:52.702 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:52.702 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:52.702 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:52.961 "name": "raid_bdev1", 00:32:52.961 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:52.961 "strip_size_kb": 0, 00:32:52.961 "state": "online", 00:32:52.961 "raid_level": "raid1", 00:32:52.961 "superblock": true, 00:32:52.961 "num_base_bdevs": 2, 00:32:52.961 "num_base_bdevs_discovered": 1, 00:32:52.961 "num_base_bdevs_operational": 1, 00:32:52.961 "base_bdevs_list": [ 00:32:52.961 { 00:32:52.961 "name": null, 00:32:52.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.961 "is_configured": false, 00:32:52.961 "data_offset": 256, 00:32:52.961 "data_size": 7936 00:32:52.961 }, 00:32:52.961 { 00:32:52.961 "name": "BaseBdev2", 00:32:52.961 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:52.961 "is_configured": true, 00:32:52.961 "data_offset": 256, 00:32:52.961 "data_size": 7936 00:32:52.961 } 00:32:52.961 ] 00:32:52.961 }' 00:32:52.961 16:49:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:32:52.961 16:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:53.220 [2024-07-24 16:49:49.999290] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:53.220 [2024-07-24 16:49:49.999462] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:53.220 [2024-07-24 16:49:49.999482] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:53.220 request: 00:32:53.220 { 00:32:53.220 "base_bdev": "BaseBdev1", 00:32:53.220 "raid_bdev": "raid_bdev1", 00:32:53.220 "method": "bdev_raid_add_base_bdev", 00:32:53.220 "req_id": 1 00:32:53.220 } 00:32:53.220 Got JSON-RPC error response 00:32:53.220 response: 00:32:53.220 { 00:32:53.220 "code": -22, 00:32:53.220 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:53.220 } 00:32:53.220 16:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:32:53.220 16:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:53.220 16:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:53.220 16:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:53.220 16:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@793 -- # 
sleep 1 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:54.598 "name": "raid_bdev1", 00:32:54.598 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:54.598 "strip_size_kb": 0, 00:32:54.598 "state": "online", 00:32:54.598 "raid_level": "raid1", 00:32:54.598 "superblock": true, 00:32:54.598 "num_base_bdevs": 2, 00:32:54.598 "num_base_bdevs_discovered": 1, 
00:32:54.598 "num_base_bdevs_operational": 1, 00:32:54.598 "base_bdevs_list": [ 00:32:54.598 { 00:32:54.598 "name": null, 00:32:54.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:54.598 "is_configured": false, 00:32:54.598 "data_offset": 256, 00:32:54.598 "data_size": 7936 00:32:54.598 }, 00:32:54.598 { 00:32:54.598 "name": "BaseBdev2", 00:32:54.598 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:54.598 "is_configured": true, 00:32:54.598 "data_offset": 256, 00:32:54.598 "data_size": 7936 00:32:54.598 } 00:32:54.598 ] 00:32:54.598 }' 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:54.598 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:55.165 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:55.165 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:55.165 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:55.165 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:55.165 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:55.165 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.165 16:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:55.425 "name": "raid_bdev1", 00:32:55.425 "uuid": "4ff0c07a-ea19-41c8-9537-5f20c5394546", 00:32:55.425 "strip_size_kb": 0, 00:32:55.425 
"state": "online", 00:32:55.425 "raid_level": "raid1", 00:32:55.425 "superblock": true, 00:32:55.425 "num_base_bdevs": 2, 00:32:55.425 "num_base_bdevs_discovered": 1, 00:32:55.425 "num_base_bdevs_operational": 1, 00:32:55.425 "base_bdevs_list": [ 00:32:55.425 { 00:32:55.425 "name": null, 00:32:55.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:55.425 "is_configured": false, 00:32:55.425 "data_offset": 256, 00:32:55.425 "data_size": 7936 00:32:55.425 }, 00:32:55.425 { 00:32:55.425 "name": "BaseBdev2", 00:32:55.425 "uuid": "570ae34a-67fc-50da-a0ec-7ba9acc1ed53", 00:32:55.425 "is_configured": true, 00:32:55.425 "data_offset": 256, 00:32:55.425 "data_size": 7936 00:32:55.425 } 00:32:55.425 ] 00:32:55.425 }' 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@798 -- # killprocess 1798671 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 1798671 ']' 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 1798671 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1798671 00:32:55.425 16:49:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1798671' 00:32:55.425 killing process with pid 1798671 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 1798671 00:32:55.425 Received shutdown signal, test time was about 60.000000 seconds 00:32:55.425 00:32:55.425 Latency(us) 00:32:55.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.425 =================================================================================================================== 00:32:55.425 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:55.425 [2024-07-24 16:49:52.256320] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:55.425 [2024-07-24 16:49:52.256458] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:55.425 16:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 1798671 00:32:55.425 [2024-07-24 16:49:52.256516] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:55.425 [2024-07-24 16:49:52.256532] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043880 name raid_bdev1, state offline 00:32:55.993 [2024-07-24 16:49:52.678351] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:57.899 16:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@800 -- # return 0 00:32:57.899 00:32:57.899 real 0m32.626s 00:32:57.899 user 0m48.965s 00:32:57.899 sys 0m5.066s 00:32:57.899 16:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:32:57.899 16:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:57.899 ************************************ 00:32:57.899 END TEST raid_rebuild_test_sb_md_separate 00:32:57.899 ************************************ 00:32:57.899 16:49:54 bdev_raid -- bdev/bdev_raid.sh@991 -- # base_malloc_params='-m 32 -i' 00:32:57.899 16:49:54 bdev_raid -- bdev/bdev_raid.sh@992 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:32:57.899 16:49:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:57.899 16:49:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:57.899 16:49:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:57.899 ************************************ 00:32:57.899 START TEST raid_state_function_test_sb_md_interleaved 00:32:57.899 ************************************ 00:32:57.899 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:32:57.899 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:32:57.899 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:32:57.899 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:32:57.899 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:57.899 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:57.899 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:57.899 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:32:57.900 16:49:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # 
raid_pid=1804415 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 1804415' 00:32:57.900 Process raid pid: 1804415 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 1804415 /var/tmp/spdk-raid.sock 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 1804415 ']' 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:57.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:57.900 16:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:57.900 [2024-07-24 16:49:54.524441] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:32:57.900 [2024-07-24 16:49:54.524555] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:57.900 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:57.900 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:57.900 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:57.900 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:57.900 [2024-07-24 16:49:54.751790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.466 [2024-07-24 16:49:55.047077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:58.724 [2024-07-24 16:49:55.395310] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:58.724 [2024-07-24 16:49:55.395346] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:58.983 [2024-07-24 16:49:55.796585] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:58.983 [2024-07-24 16:49:55.796640] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:58.983 [2024-07-24 16:49:55.796655] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:58.983 [2024-07-24 16:49:55.796671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.983 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:32:59.241 16:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:59.241 "name": "Existed_Raid", 00:32:59.241 "uuid": "3faf5212-aeec-4b3b-81c0-d29e48400632", 00:32:59.241 "strip_size_kb": 0, 00:32:59.241 "state": "configuring", 00:32:59.241 "raid_level": "raid1", 00:32:59.241 "superblock": true, 00:32:59.241 "num_base_bdevs": 2, 00:32:59.241 "num_base_bdevs_discovered": 0, 00:32:59.241 "num_base_bdevs_operational": 2, 00:32:59.241 "base_bdevs_list": [ 00:32:59.241 { 00:32:59.241 "name": "BaseBdev1", 00:32:59.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.241 "is_configured": false, 00:32:59.241 "data_offset": 0, 00:32:59.241 "data_size": 0 00:32:59.241 }, 00:32:59.241 { 00:32:59.241 "name": "BaseBdev2", 00:32:59.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.241 "is_configured": false, 00:32:59.241 "data_offset": 0, 00:32:59.241 "data_size": 0 00:32:59.241 } 00:32:59.241 ] 00:32:59.241 }' 00:32:59.241 16:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:59.241 16:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:32:59.846 16:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:00.124 [2024-07-24 16:49:56.759034] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:00.124 [2024-07-24 16:49:56.759073] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f380 name Existed_Raid, state configuring 00:33:00.124 16:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 
-b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:33:00.383 [2024-07-24 16:49:56.987677] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:00.383 [2024-07-24 16:49:56.987723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:00.383 [2024-07-24 16:49:56.987736] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:00.383 [2024-07-24 16:49:56.987752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:00.383 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:33:00.641 [2024-07-24 16:49:57.273049] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:00.641 BaseBdev1 00:33:00.641 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:33:00.641 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:33:00.641 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:00.641 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:33:00.641 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:00.641 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:00.642 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:00.901 16:49:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:00.901 [ 00:33:00.901 { 00:33:00.901 "name": "BaseBdev1", 00:33:00.901 "aliases": [ 00:33:00.901 "319cf3c1-5f24-4920-bfdd-2975c081ed59" 00:33:00.901 ], 00:33:00.901 "product_name": "Malloc disk", 00:33:00.901 "block_size": 4128, 00:33:00.901 "num_blocks": 8192, 00:33:00.901 "uuid": "319cf3c1-5f24-4920-bfdd-2975c081ed59", 00:33:00.901 "md_size": 32, 00:33:00.901 "md_interleave": true, 00:33:00.901 "dif_type": 0, 00:33:00.901 "assigned_rate_limits": { 00:33:00.901 "rw_ios_per_sec": 0, 00:33:00.901 "rw_mbytes_per_sec": 0, 00:33:00.901 "r_mbytes_per_sec": 0, 00:33:00.901 "w_mbytes_per_sec": 0 00:33:00.901 }, 00:33:00.901 "claimed": true, 00:33:00.901 "claim_type": "exclusive_write", 00:33:00.901 "zoned": false, 00:33:00.901 "supported_io_types": { 00:33:00.901 "read": true, 00:33:00.901 "write": true, 00:33:00.901 "unmap": true, 00:33:00.901 "flush": true, 00:33:00.901 "reset": true, 00:33:00.901 "nvme_admin": false, 00:33:00.901 "nvme_io": false, 00:33:00.901 "nvme_io_md": false, 00:33:00.901 "write_zeroes": true, 00:33:00.901 "zcopy": true, 00:33:00.901 "get_zone_info": false, 00:33:00.901 "zone_management": false, 00:33:00.901 "zone_append": false, 00:33:00.901 "compare": false, 00:33:00.901 "compare_and_write": false, 00:33:00.901 "abort": true, 00:33:00.901 "seek_hole": false, 00:33:00.901 "seek_data": false, 00:33:00.901 "copy": true, 00:33:00.901 "nvme_iov_md": false 00:33:00.901 }, 00:33:00.901 "memory_domains": [ 00:33:00.901 { 00:33:00.901 "dma_device_id": "system", 00:33:00.901 "dma_device_type": 1 00:33:00.901 }, 00:33:00.901 { 00:33:00.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.901 "dma_device_type": 2 00:33:00.901 } 00:33:00.901 ], 00:33:00.901 "driver_specific": {} 00:33:00.901 } 00:33:00.901 ] 00:33:00.901 16:49:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:33:00.901 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:00.901 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:00.901 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:00.901 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:00.901 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:00.901 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:00.901 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:00.901 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:00.901 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:00.901 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:00.901 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:00.901 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:01.160 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:01.160 "name": "Existed_Raid", 00:33:01.160 "uuid": 
"e7c8f15d-be57-492d-a3fa-5efa17080201", 00:33:01.160 "strip_size_kb": 0, 00:33:01.160 "state": "configuring", 00:33:01.160 "raid_level": "raid1", 00:33:01.160 "superblock": true, 00:33:01.160 "num_base_bdevs": 2, 00:33:01.160 "num_base_bdevs_discovered": 1, 00:33:01.160 "num_base_bdevs_operational": 2, 00:33:01.160 "base_bdevs_list": [ 00:33:01.160 { 00:33:01.160 "name": "BaseBdev1", 00:33:01.160 "uuid": "319cf3c1-5f24-4920-bfdd-2975c081ed59", 00:33:01.160 "is_configured": true, 00:33:01.161 "data_offset": 256, 00:33:01.161 "data_size": 7936 00:33:01.161 }, 00:33:01.161 { 00:33:01.161 "name": "BaseBdev2", 00:33:01.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.161 "is_configured": false, 00:33:01.161 "data_offset": 0, 00:33:01.161 "data_size": 0 00:33:01.161 } 00:33:01.161 ] 00:33:01.161 }' 00:33:01.161 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:01.161 16:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:02.095 16:49:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:02.095 [2024-07-24 16:49:58.837390] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:02.095 [2024-07-24 16:49:58.837443] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003f680 name Existed_Raid, state configuring 00:33:02.095 16:49:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:33:02.354 [2024-07-24 16:49:59.058051] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:02.354 [2024-07-24 16:49:59.060333] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:02.354 [2024-07-24 16:49:59.060373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:02.354 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:33:02.354 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:02.354 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:02.354 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:02.354 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:02.354 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:02.354 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:02.354 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:02.354 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:02.354 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:02.354 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:02.354 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:02.354 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:33:02.354 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:02.613 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:02.613 "name": "Existed_Raid", 00:33:02.613 "uuid": "e2a8f2ce-9deb-42c2-b0dd-dc88b82948d4", 00:33:02.613 "strip_size_kb": 0, 00:33:02.613 "state": "configuring", 00:33:02.613 "raid_level": "raid1", 00:33:02.613 "superblock": true, 00:33:02.613 "num_base_bdevs": 2, 00:33:02.613 "num_base_bdevs_discovered": 1, 00:33:02.613 "num_base_bdevs_operational": 2, 00:33:02.613 "base_bdevs_list": [ 00:33:02.613 { 00:33:02.613 "name": "BaseBdev1", 00:33:02.613 "uuid": "319cf3c1-5f24-4920-bfdd-2975c081ed59", 00:33:02.613 "is_configured": true, 00:33:02.613 "data_offset": 256, 00:33:02.613 "data_size": 7936 00:33:02.613 }, 00:33:02.613 { 00:33:02.613 "name": "BaseBdev2", 00:33:02.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.613 "is_configured": false, 00:33:02.613 "data_offset": 0, 00:33:02.613 "data_size": 0 00:33:02.613 } 00:33:02.613 ] 00:33:02.613 }' 00:33:02.613 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:02.613 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:03.178 16:49:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:33:03.437 [2024-07-24 16:50:00.148164] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:03.437 [2024-07-24 16:50:00.148400] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600003ff80 00:33:03.437 [2024-07-24 16:50:00.148419] bdev_raid.c:1721:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 7936, blocklen 4128 00:33:03.437 [2024-07-24 16:50:00.148519] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:33:03.437 [2024-07-24 16:50:00.148654] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600003ff80 00:33:03.437 [2024-07-24 16:50:00.148671] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x61600003ff80 00:33:03.437 [2024-07-24 16:50:00.148762] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:03.437 BaseBdev2 00:33:03.437 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:33:03.437 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:33:03.437 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:03.437 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:33:03.437 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:03.437 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:03.437 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:03.695 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:03.954 [ 00:33:03.954 { 00:33:03.954 "name": "BaseBdev2", 00:33:03.954 "aliases": [ 00:33:03.954 "c571efca-6abc-47c2-801a-5a973076f9ce" 00:33:03.954 ], 00:33:03.954 
"product_name": "Malloc disk", 00:33:03.954 "block_size": 4128, 00:33:03.954 "num_blocks": 8192, 00:33:03.954 "uuid": "c571efca-6abc-47c2-801a-5a973076f9ce", 00:33:03.954 "md_size": 32, 00:33:03.954 "md_interleave": true, 00:33:03.954 "dif_type": 0, 00:33:03.954 "assigned_rate_limits": { 00:33:03.954 "rw_ios_per_sec": 0, 00:33:03.954 "rw_mbytes_per_sec": 0, 00:33:03.954 "r_mbytes_per_sec": 0, 00:33:03.954 "w_mbytes_per_sec": 0 00:33:03.954 }, 00:33:03.954 "claimed": true, 00:33:03.954 "claim_type": "exclusive_write", 00:33:03.954 "zoned": false, 00:33:03.954 "supported_io_types": { 00:33:03.954 "read": true, 00:33:03.954 "write": true, 00:33:03.954 "unmap": true, 00:33:03.954 "flush": true, 00:33:03.954 "reset": true, 00:33:03.954 "nvme_admin": false, 00:33:03.954 "nvme_io": false, 00:33:03.954 "nvme_io_md": false, 00:33:03.954 "write_zeroes": true, 00:33:03.954 "zcopy": true, 00:33:03.954 "get_zone_info": false, 00:33:03.954 "zone_management": false, 00:33:03.954 "zone_append": false, 00:33:03.954 "compare": false, 00:33:03.954 "compare_and_write": false, 00:33:03.954 "abort": true, 00:33:03.954 "seek_hole": false, 00:33:03.954 "seek_data": false, 00:33:03.954 "copy": true, 00:33:03.954 "nvme_iov_md": false 00:33:03.954 }, 00:33:03.954 "memory_domains": [ 00:33:03.954 { 00:33:03.954 "dma_device_id": "system", 00:33:03.954 "dma_device_type": 1 00:33:03.954 }, 00:33:03.954 { 00:33:03.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:03.954 "dma_device_type": 2 00:33:03.954 } 00:33:03.954 ], 00:33:03.954 "driver_specific": {} 00:33:03.954 } 00:33:03.954 ] 00:33:03.954 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:33:03.954 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:03.954 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:03.954 16:50:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:33:03.954 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:03.954 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:03.954 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:03.954 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:03.954 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:03.954 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:03.954 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:03.954 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:03.954 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:03.954 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.954 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:04.213 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:04.213 "name": "Existed_Raid", 00:33:04.213 "uuid": "e2a8f2ce-9deb-42c2-b0dd-dc88b82948d4", 00:33:04.213 "strip_size_kb": 0, 00:33:04.213 "state": "online", 00:33:04.213 "raid_level": "raid1", 
00:33:04.213 "superblock": true, 00:33:04.213 "num_base_bdevs": 2, 00:33:04.213 "num_base_bdevs_discovered": 2, 00:33:04.213 "num_base_bdevs_operational": 2, 00:33:04.213 "base_bdevs_list": [ 00:33:04.213 { 00:33:04.213 "name": "BaseBdev1", 00:33:04.213 "uuid": "319cf3c1-5f24-4920-bfdd-2975c081ed59", 00:33:04.213 "is_configured": true, 00:33:04.213 "data_offset": 256, 00:33:04.213 "data_size": 7936 00:33:04.213 }, 00:33:04.213 { 00:33:04.213 "name": "BaseBdev2", 00:33:04.213 "uuid": "c571efca-6abc-47c2-801a-5a973076f9ce", 00:33:04.213 "is_configured": true, 00:33:04.213 "data_offset": 256, 00:33:04.213 "data_size": 7936 00:33:04.213 } 00:33:04.213 ] 00:33:04.213 }' 00:33:04.213 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:04.213 16:50:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:05.146 16:50:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:33:05.146 16:50:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:05.146 16:50:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:05.146 16:50:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:05.146 16:50:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:05.146 16:50:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:33:05.146 16:50:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:05.146 16:50:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:05.146 [2024-07-24 16:50:01.973630] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:05.146 16:50:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:05.146 "name": "Existed_Raid", 00:33:05.146 "aliases": [ 00:33:05.147 "e2a8f2ce-9deb-42c2-b0dd-dc88b82948d4" 00:33:05.147 ], 00:33:05.147 "product_name": "Raid Volume", 00:33:05.147 "block_size": 4128, 00:33:05.147 "num_blocks": 7936, 00:33:05.147 "uuid": "e2a8f2ce-9deb-42c2-b0dd-dc88b82948d4", 00:33:05.147 "md_size": 32, 00:33:05.147 "md_interleave": true, 00:33:05.147 "dif_type": 0, 00:33:05.147 "assigned_rate_limits": { 00:33:05.147 "rw_ios_per_sec": 0, 00:33:05.147 "rw_mbytes_per_sec": 0, 00:33:05.147 "r_mbytes_per_sec": 0, 00:33:05.147 "w_mbytes_per_sec": 0 00:33:05.147 }, 00:33:05.147 "claimed": false, 00:33:05.147 "zoned": false, 00:33:05.147 "supported_io_types": { 00:33:05.147 "read": true, 00:33:05.147 "write": true, 00:33:05.147 "unmap": false, 00:33:05.147 "flush": false, 00:33:05.147 "reset": true, 00:33:05.147 "nvme_admin": false, 00:33:05.147 "nvme_io": false, 00:33:05.147 "nvme_io_md": false, 00:33:05.147 "write_zeroes": true, 00:33:05.147 "zcopy": false, 00:33:05.147 "get_zone_info": false, 00:33:05.147 "zone_management": false, 00:33:05.147 "zone_append": false, 00:33:05.147 "compare": false, 00:33:05.147 "compare_and_write": false, 00:33:05.147 "abort": false, 00:33:05.147 "seek_hole": false, 00:33:05.147 "seek_data": false, 00:33:05.147 "copy": false, 00:33:05.147 "nvme_iov_md": false 00:33:05.147 }, 00:33:05.147 "memory_domains": [ 00:33:05.147 { 00:33:05.147 "dma_device_id": "system", 00:33:05.147 "dma_device_type": 1 00:33:05.147 }, 00:33:05.147 { 00:33:05.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:05.147 "dma_device_type": 2 00:33:05.147 }, 00:33:05.147 { 00:33:05.147 "dma_device_id": "system", 00:33:05.147 "dma_device_type": 1 00:33:05.147 }, 
00:33:05.147 { 00:33:05.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:05.147 "dma_device_type": 2 00:33:05.147 } 00:33:05.147 ], 00:33:05.147 "driver_specific": { 00:33:05.147 "raid": { 00:33:05.147 "uuid": "e2a8f2ce-9deb-42c2-b0dd-dc88b82948d4", 00:33:05.147 "strip_size_kb": 0, 00:33:05.147 "state": "online", 00:33:05.147 "raid_level": "raid1", 00:33:05.147 "superblock": true, 00:33:05.147 "num_base_bdevs": 2, 00:33:05.147 "num_base_bdevs_discovered": 2, 00:33:05.147 "num_base_bdevs_operational": 2, 00:33:05.147 "base_bdevs_list": [ 00:33:05.147 { 00:33:05.147 "name": "BaseBdev1", 00:33:05.147 "uuid": "319cf3c1-5f24-4920-bfdd-2975c081ed59", 00:33:05.147 "is_configured": true, 00:33:05.147 "data_offset": 256, 00:33:05.147 "data_size": 7936 00:33:05.147 }, 00:33:05.147 { 00:33:05.147 "name": "BaseBdev2", 00:33:05.147 "uuid": "c571efca-6abc-47c2-801a-5a973076f9ce", 00:33:05.147 "is_configured": true, 00:33:05.147 "data_offset": 256, 00:33:05.147 "data_size": 7936 00:33:05.147 } 00:33:05.147 ] 00:33:05.147 } 00:33:05.147 } 00:33:05.147 }' 00:33:05.147 16:50:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:05.405 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:33:05.405 BaseBdev2' 00:33:05.405 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:05.405 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:33:05.405 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:05.405 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:33:05.405 "name": "BaseBdev1", 00:33:05.405 "aliases": [ 00:33:05.405 "319cf3c1-5f24-4920-bfdd-2975c081ed59" 00:33:05.405 ], 00:33:05.405 "product_name": "Malloc disk", 00:33:05.405 "block_size": 4128, 00:33:05.405 "num_blocks": 8192, 00:33:05.405 "uuid": "319cf3c1-5f24-4920-bfdd-2975c081ed59", 00:33:05.405 "md_size": 32, 00:33:05.405 "md_interleave": true, 00:33:05.405 "dif_type": 0, 00:33:05.405 "assigned_rate_limits": { 00:33:05.405 "rw_ios_per_sec": 0, 00:33:05.405 "rw_mbytes_per_sec": 0, 00:33:05.405 "r_mbytes_per_sec": 0, 00:33:05.405 "w_mbytes_per_sec": 0 00:33:05.405 }, 00:33:05.405 "claimed": true, 00:33:05.405 "claim_type": "exclusive_write", 00:33:05.405 "zoned": false, 00:33:05.405 "supported_io_types": { 00:33:05.405 "read": true, 00:33:05.405 "write": true, 00:33:05.405 "unmap": true, 00:33:05.405 "flush": true, 00:33:05.405 "reset": true, 00:33:05.405 "nvme_admin": false, 00:33:05.405 "nvme_io": false, 00:33:05.405 "nvme_io_md": false, 00:33:05.405 "write_zeroes": true, 00:33:05.405 "zcopy": true, 00:33:05.405 "get_zone_info": false, 00:33:05.405 "zone_management": false, 00:33:05.405 "zone_append": false, 00:33:05.405 "compare": false, 00:33:05.405 "compare_and_write": false, 00:33:05.405 "abort": true, 00:33:05.405 "seek_hole": false, 00:33:05.405 "seek_data": false, 00:33:05.405 "copy": true, 00:33:05.405 "nvme_iov_md": false 00:33:05.405 }, 00:33:05.405 "memory_domains": [ 00:33:05.405 { 00:33:05.405 "dma_device_id": "system", 00:33:05.405 "dma_device_type": 1 00:33:05.406 }, 00:33:05.406 { 00:33:05.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:05.406 "dma_device_type": 2 00:33:05.406 } 00:33:05.406 ], 00:33:05.406 "driver_specific": {} 00:33:05.406 }' 00:33:05.406 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:05.663 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:05.663 16:50:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:05.663 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:05.663 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:05.663 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:05.663 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:05.663 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:05.663 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:05.663 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:05.920 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:05.920 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:05.921 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:05.921 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:05.921 16:50:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:06.485 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:06.485 "name": "BaseBdev2", 00:33:06.485 "aliases": [ 00:33:06.485 "c571efca-6abc-47c2-801a-5a973076f9ce" 00:33:06.485 ], 00:33:06.485 "product_name": "Malloc disk", 00:33:06.485 "block_size": 4128, 00:33:06.485 
"num_blocks": 8192, 00:33:06.485 "uuid": "c571efca-6abc-47c2-801a-5a973076f9ce", 00:33:06.485 "md_size": 32, 00:33:06.485 "md_interleave": true, 00:33:06.485 "dif_type": 0, 00:33:06.485 "assigned_rate_limits": { 00:33:06.485 "rw_ios_per_sec": 0, 00:33:06.485 "rw_mbytes_per_sec": 0, 00:33:06.485 "r_mbytes_per_sec": 0, 00:33:06.485 "w_mbytes_per_sec": 0 00:33:06.485 }, 00:33:06.485 "claimed": true, 00:33:06.485 "claim_type": "exclusive_write", 00:33:06.485 "zoned": false, 00:33:06.485 "supported_io_types": { 00:33:06.485 "read": true, 00:33:06.485 "write": true, 00:33:06.485 "unmap": true, 00:33:06.485 "flush": true, 00:33:06.485 "reset": true, 00:33:06.485 "nvme_admin": false, 00:33:06.485 "nvme_io": false, 00:33:06.485 "nvme_io_md": false, 00:33:06.485 "write_zeroes": true, 00:33:06.485 "zcopy": true, 00:33:06.485 "get_zone_info": false, 00:33:06.485 "zone_management": false, 00:33:06.485 "zone_append": false, 00:33:06.485 "compare": false, 00:33:06.485 "compare_and_write": false, 00:33:06.485 "abort": true, 00:33:06.485 "seek_hole": false, 00:33:06.485 "seek_data": false, 00:33:06.485 "copy": true, 00:33:06.485 "nvme_iov_md": false 00:33:06.485 }, 00:33:06.485 "memory_domains": [ 00:33:06.485 { 00:33:06.485 "dma_device_id": "system", 00:33:06.485 "dma_device_type": 1 00:33:06.486 }, 00:33:06.486 { 00:33:06.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:06.486 "dma_device_type": 2 00:33:06.486 } 00:33:06.486 ], 00:33:06.486 "driver_specific": {} 00:33:06.486 }' 00:33:06.486 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:06.486 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:06.486 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:06.486 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:06.486 16:50:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:06.486 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:06.486 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:06.486 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:06.742 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:06.742 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:06.742 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:06.742 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:06.742 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:06.998 [2024-07-24 16:50:03.698040] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:06.998 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:06.999 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:07.256 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:07.256 "name": "Existed_Raid", 00:33:07.256 "uuid": "e2a8f2ce-9deb-42c2-b0dd-dc88b82948d4", 00:33:07.256 "strip_size_kb": 0, 00:33:07.256 "state": "online", 00:33:07.256 "raid_level": "raid1", 00:33:07.256 "superblock": true, 00:33:07.256 "num_base_bdevs": 2, 00:33:07.256 
"num_base_bdevs_discovered": 1, 00:33:07.256 "num_base_bdevs_operational": 1, 00:33:07.256 "base_bdevs_list": [ 00:33:07.256 { 00:33:07.256 "name": null, 00:33:07.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:07.256 "is_configured": false, 00:33:07.256 "data_offset": 256, 00:33:07.256 "data_size": 7936 00:33:07.256 }, 00:33:07.256 { 00:33:07.256 "name": "BaseBdev2", 00:33:07.256 "uuid": "c571efca-6abc-47c2-801a-5a973076f9ce", 00:33:07.256 "is_configured": true, 00:33:07.256 "data_offset": 256, 00:33:07.256 "data_size": 7936 00:33:07.256 } 00:33:07.256 ] 00:33:07.256 }' 00:33:07.256 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:07.256 16:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:07.821 16:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:33:07.821 16:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:07.821 16:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:07.821 16:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:08.080 16:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:08.080 16:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:08.080 16:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:08.338 [2024-07-24 16:50:05.016231] 
bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:08.338 [2024-07-24 16:50:05.016350] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:08.338 [2024-07-24 16:50:05.156871] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:08.338 [2024-07-24 16:50:05.156923] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:08.338 [2024-07-24 16:50:05.156941] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600003ff80 name Existed_Raid, state offline 00:33:08.338 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:08.338 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:08.338 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:08.338 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:33:08.597 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:33:08.597 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:33:08.597 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:33:08.597 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 1804415 00:33:08.597 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 1804415 ']' 00:33:08.597 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 1804415 00:33:08.597 16:50:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:33:08.597 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:08.597 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1804415 00:33:08.597 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:08.597 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:08.597 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1804415' 00:33:08.597 killing process with pid 1804415 00:33:08.597 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 1804415 00:33:08.597 [2024-07-24 16:50:05.404593] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:08.597 16:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 1804415 00:33:08.597 [2024-07-24 16:50:05.429460] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:10.499 16:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:33:10.499 00:33:10.499 real 0m12.643s 00:33:10.499 user 0m20.719s 00:33:10.499 sys 0m2.205s 00:33:10.499 16:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:10.499 16:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:10.499 ************************************ 00:33:10.499 END TEST raid_state_function_test_sb_md_interleaved 00:33:10.499 ************************************ 00:33:10.499 16:50:07 bdev_raid -- bdev/bdev_raid.sh@993 -- # 
run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:33:10.499 16:50:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:10.499 16:50:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:10.499 16:50:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:10.499 ************************************ 00:33:10.499 START TEST raid_superblock_test_md_interleaved 00:33:10.499 ************************************ 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@414 -- # local strip_size 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:33:10.499 16:50:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@427 -- # raid_pid=1806750 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@428 -- # waitforlisten 1806750 /var/tmp/spdk-raid.sock 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 1806750 ']' 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:10.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:10.499 16:50:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:10.499 [2024-07-24 16:50:07.242189] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:33:10.500 [2024-07-24 16:50:07.242312] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806750 ] 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:02.3 cannot be used 
00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:10.758 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:10.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:10.758 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:10.758 [2024-07-24 16:50:07.469516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.017 [2024-07-24 16:50:07.752444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.276 [2024-07-24 16:50:08.102849] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:11.276 [2024-07-24 16:50:08.102881] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:11.535 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:11.535 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:33:11.535 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:33:11.535 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:33:11.535 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:33:11.535 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:33:11.535 
16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:11.535 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:11.535 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:33:11.535 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:11.535 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:33:11.793 malloc1 00:33:11.793 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:12.051 [2024-07-24 16:50:08.780431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:12.051 [2024-07-24 16:50:08.780495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:12.052 [2024-07-24 16:50:08.780526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:33:12.052 [2024-07-24 16:50:08.780543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:12.052 [2024-07-24 16:50:08.782960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:12.052 [2024-07-24 16:50:08.782993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:12.052 pt1 00:33:12.052 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:33:12.052 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- 
# (( i <= num_base_bdevs )) 00:33:12.052 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:33:12.052 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:33:12.052 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:12.052 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:12.052 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:33:12.052 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:12.052 16:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:33:12.310 malloc2 00:33:12.310 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:12.569 [2024-07-24 16:50:09.284470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:12.569 [2024-07-24 16:50:09.284530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:12.569 [2024-07-24 16:50:09.284563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:33:12.569 [2024-07-24 16:50:09.284581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:12.569 [2024-07-24 16:50:09.287013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:12.569 [2024-07-24 16:50:09.287051] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:12.569 pt2 00:33:12.569 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:33:12.569 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:33:12.569 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:33:12.827 [2024-07-24 16:50:09.497063] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:12.827 [2024-07-24 16:50:09.499419] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:12.828 [2024-07-24 16:50:09.499655] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000040880 00:33:12.828 [2024-07-24 16:50:09.499675] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:12.828 [2024-07-24 16:50:09.499779] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:33:12.828 [2024-07-24 16:50:09.499902] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000040880 00:33:12.828 [2024-07-24 16:50:09.499920] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000040880 00:33:12.828 [2024-07-24 16:50:09.500017] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:12.828 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:12.828 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:12.828 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:12.828 16:50:09 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:12.828 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:12.828 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:12.828 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:12.828 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:12.828 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:12.828 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:12.828 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:12.828 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:13.086 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:13.086 "name": "raid_bdev1", 00:33:13.086 "uuid": "9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5", 00:33:13.086 "strip_size_kb": 0, 00:33:13.086 "state": "online", 00:33:13.086 "raid_level": "raid1", 00:33:13.086 "superblock": true, 00:33:13.086 "num_base_bdevs": 2, 00:33:13.086 "num_base_bdevs_discovered": 2, 00:33:13.087 "num_base_bdevs_operational": 2, 00:33:13.087 "base_bdevs_list": [ 00:33:13.087 { 00:33:13.087 "name": "pt1", 00:33:13.087 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:13.087 "is_configured": true, 00:33:13.087 "data_offset": 256, 00:33:13.087 "data_size": 7936 00:33:13.087 }, 00:33:13.087 { 00:33:13.087 "name": "pt2", 00:33:13.087 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:33:13.087 "is_configured": true, 00:33:13.087 "data_offset": 256, 00:33:13.087 "data_size": 7936 00:33:13.087 } 00:33:13.087 ] 00:33:13.087 }' 00:33:13.087 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:13.087 16:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:13.652 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:33:13.652 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:13.652 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:13.652 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:13.652 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:13.652 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:33:13.653 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:13.653 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:13.911 [2024-07-24 16:50:10.544199] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:13.911 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:13.911 "name": "raid_bdev1", 00:33:13.911 "aliases": [ 00:33:13.911 "9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5" 00:33:13.911 ], 00:33:13.911 "product_name": "Raid Volume", 00:33:13.911 "block_size": 4128, 00:33:13.911 "num_blocks": 7936, 00:33:13.911 "uuid": 
"9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5", 00:33:13.911 "md_size": 32, 00:33:13.911 "md_interleave": true, 00:33:13.911 "dif_type": 0, 00:33:13.911 "assigned_rate_limits": { 00:33:13.911 "rw_ios_per_sec": 0, 00:33:13.911 "rw_mbytes_per_sec": 0, 00:33:13.911 "r_mbytes_per_sec": 0, 00:33:13.911 "w_mbytes_per_sec": 0 00:33:13.911 }, 00:33:13.911 "claimed": false, 00:33:13.911 "zoned": false, 00:33:13.911 "supported_io_types": { 00:33:13.911 "read": true, 00:33:13.911 "write": true, 00:33:13.911 "unmap": false, 00:33:13.911 "flush": false, 00:33:13.911 "reset": true, 00:33:13.911 "nvme_admin": false, 00:33:13.911 "nvme_io": false, 00:33:13.911 "nvme_io_md": false, 00:33:13.911 "write_zeroes": true, 00:33:13.911 "zcopy": false, 00:33:13.911 "get_zone_info": false, 00:33:13.911 "zone_management": false, 00:33:13.911 "zone_append": false, 00:33:13.911 "compare": false, 00:33:13.911 "compare_and_write": false, 00:33:13.911 "abort": false, 00:33:13.911 "seek_hole": false, 00:33:13.911 "seek_data": false, 00:33:13.911 "copy": false, 00:33:13.911 "nvme_iov_md": false 00:33:13.911 }, 00:33:13.911 "memory_domains": [ 00:33:13.911 { 00:33:13.911 "dma_device_id": "system", 00:33:13.911 "dma_device_type": 1 00:33:13.911 }, 00:33:13.911 { 00:33:13.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:13.911 "dma_device_type": 2 00:33:13.911 }, 00:33:13.911 { 00:33:13.911 "dma_device_id": "system", 00:33:13.911 "dma_device_type": 1 00:33:13.911 }, 00:33:13.911 { 00:33:13.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:13.911 "dma_device_type": 2 00:33:13.911 } 00:33:13.911 ], 00:33:13.911 "driver_specific": { 00:33:13.911 "raid": { 00:33:13.911 "uuid": "9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5", 00:33:13.911 "strip_size_kb": 0, 00:33:13.911 "state": "online", 00:33:13.911 "raid_level": "raid1", 00:33:13.911 "superblock": true, 00:33:13.911 "num_base_bdevs": 2, 00:33:13.911 "num_base_bdevs_discovered": 2, 00:33:13.911 "num_base_bdevs_operational": 2, 00:33:13.911 "base_bdevs_list": [ 
00:33:13.911 { 00:33:13.911 "name": "pt1", 00:33:13.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:13.911 "is_configured": true, 00:33:13.911 "data_offset": 256, 00:33:13.911 "data_size": 7936 00:33:13.911 }, 00:33:13.911 { 00:33:13.911 "name": "pt2", 00:33:13.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:13.911 "is_configured": true, 00:33:13.911 "data_offset": 256, 00:33:13.911 "data_size": 7936 00:33:13.911 } 00:33:13.911 ] 00:33:13.911 } 00:33:13.911 } 00:33:13.911 }' 00:33:13.911 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:13.911 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:13.911 pt2' 00:33:13.911 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:13.911 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:13.911 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:14.170 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:14.170 "name": "pt1", 00:33:14.170 "aliases": [ 00:33:14.170 "00000000-0000-0000-0000-000000000001" 00:33:14.170 ], 00:33:14.170 "product_name": "passthru", 00:33:14.170 "block_size": 4128, 00:33:14.170 "num_blocks": 8192, 00:33:14.170 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:14.170 "md_size": 32, 00:33:14.170 "md_interleave": true, 00:33:14.170 "dif_type": 0, 00:33:14.170 "assigned_rate_limits": { 00:33:14.170 "rw_ios_per_sec": 0, 00:33:14.170 "rw_mbytes_per_sec": 0, 00:33:14.170 "r_mbytes_per_sec": 0, 00:33:14.170 "w_mbytes_per_sec": 0 00:33:14.170 }, 00:33:14.170 "claimed": true, 
00:33:14.170 "claim_type": "exclusive_write", 00:33:14.170 "zoned": false, 00:33:14.170 "supported_io_types": { 00:33:14.170 "read": true, 00:33:14.170 "write": true, 00:33:14.170 "unmap": true, 00:33:14.170 "flush": true, 00:33:14.170 "reset": true, 00:33:14.170 "nvme_admin": false, 00:33:14.170 "nvme_io": false, 00:33:14.170 "nvme_io_md": false, 00:33:14.170 "write_zeroes": true, 00:33:14.170 "zcopy": true, 00:33:14.170 "get_zone_info": false, 00:33:14.170 "zone_management": false, 00:33:14.170 "zone_append": false, 00:33:14.170 "compare": false, 00:33:14.170 "compare_and_write": false, 00:33:14.170 "abort": true, 00:33:14.170 "seek_hole": false, 00:33:14.170 "seek_data": false, 00:33:14.170 "copy": true, 00:33:14.170 "nvme_iov_md": false 00:33:14.170 }, 00:33:14.170 "memory_domains": [ 00:33:14.170 { 00:33:14.170 "dma_device_id": "system", 00:33:14.170 "dma_device_type": 1 00:33:14.170 }, 00:33:14.170 { 00:33:14.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:14.170 "dma_device_type": 2 00:33:14.170 } 00:33:14.170 ], 00:33:14.170 "driver_specific": { 00:33:14.170 "passthru": { 00:33:14.170 "name": "pt1", 00:33:14.170 "base_bdev_name": "malloc1" 00:33:14.170 } 00:33:14.170 } 00:33:14.170 }' 00:33:14.170 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:14.170 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:14.170 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:14.170 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:14.170 16:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:14.170 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:14.170 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:33:14.428 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:14.428 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:14.428 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:14.428 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:14.428 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:14.428 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:14.428 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:14.428 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:14.730 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:14.730 "name": "pt2", 00:33:14.730 "aliases": [ 00:33:14.730 "00000000-0000-0000-0000-000000000002" 00:33:14.730 ], 00:33:14.730 "product_name": "passthru", 00:33:14.730 "block_size": 4128, 00:33:14.730 "num_blocks": 8192, 00:33:14.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:14.730 "md_size": 32, 00:33:14.730 "md_interleave": true, 00:33:14.730 "dif_type": 0, 00:33:14.730 "assigned_rate_limits": { 00:33:14.730 "rw_ios_per_sec": 0, 00:33:14.730 "rw_mbytes_per_sec": 0, 00:33:14.730 "r_mbytes_per_sec": 0, 00:33:14.730 "w_mbytes_per_sec": 0 00:33:14.730 }, 00:33:14.730 "claimed": true, 00:33:14.730 "claim_type": "exclusive_write", 00:33:14.730 "zoned": false, 00:33:14.730 "supported_io_types": { 00:33:14.730 "read": true, 00:33:14.730 "write": true, 00:33:14.730 "unmap": true, 00:33:14.730 "flush": true, 00:33:14.730 "reset": 
true, 00:33:14.730 "nvme_admin": false, 00:33:14.730 "nvme_io": false, 00:33:14.730 "nvme_io_md": false, 00:33:14.730 "write_zeroes": true, 00:33:14.730 "zcopy": true, 00:33:14.730 "get_zone_info": false, 00:33:14.730 "zone_management": false, 00:33:14.730 "zone_append": false, 00:33:14.730 "compare": false, 00:33:14.730 "compare_and_write": false, 00:33:14.730 "abort": true, 00:33:14.730 "seek_hole": false, 00:33:14.730 "seek_data": false, 00:33:14.730 "copy": true, 00:33:14.731 "nvme_iov_md": false 00:33:14.731 }, 00:33:14.731 "memory_domains": [ 00:33:14.731 { 00:33:14.731 "dma_device_id": "system", 00:33:14.731 "dma_device_type": 1 00:33:14.731 }, 00:33:14.731 { 00:33:14.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:14.731 "dma_device_type": 2 00:33:14.731 } 00:33:14.731 ], 00:33:14.731 "driver_specific": { 00:33:14.731 "passthru": { 00:33:14.731 "name": "pt2", 00:33:14.731 "base_bdev_name": "malloc2" 00:33:14.731 } 00:33:14.731 } 00:33:14.731 }' 00:33:14.731 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:14.731 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:14.731 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:14.731 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:14.731 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:15.017 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:15.017 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:15.017 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:15.017 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 
00:33:15.017 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:15.017 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:15.017 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:15.017 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:15.017 16:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:33:15.275 [2024-07-24 16:50:12.004215] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:15.275 16:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5 00:33:15.275 16:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' -z 9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5 ']' 00:33:15.275 16:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:15.534 [2024-07-24 16:50:12.236489] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:15.534 [2024-07-24 16:50:12.236522] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:15.534 [2024-07-24 16:50:12.236615] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:15.534 [2024-07-24 16:50:12.236684] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:15.534 [2024-07-24 16:50:12.236706] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040880 name raid_bdev1, state offline 00:33:15.534 16:50:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:15.534 16:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:33:15.793 16:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:33:15.793 16:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:33:15.793 16:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:33:15.793 16:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:16.050 16:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:33:16.051 16:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:16.309 16:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:33:16.309 16:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:16.309 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:33:16.309 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:16.309 16:50:13 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:33:16.309 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:16.309 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:16.309 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:16.309 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:16.309 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:16.309 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:16.309 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:16.309 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:16.309 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:33:16.309 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:16.568 [2024-07-24 16:50:13.363492] bdev_raid.c:3288:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev malloc1 is claimed 00:33:16.568 [2024-07-24 16:50:13.365806] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:16.568 [2024-07-24 16:50:13.365882] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:16.568 [2024-07-24 16:50:13.365941] bdev_raid.c:3196:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:16.568 [2024-07-24 16:50:13.365964] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:16.568 [2024-07-24 16:50:13.365982] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000040e80 name raid_bdev1, state configuring 00:33:16.568 request: 00:33:16.568 { 00:33:16.568 "name": "raid_bdev1", 00:33:16.568 "raid_level": "raid1", 00:33:16.568 "base_bdevs": [ 00:33:16.568 "malloc1", 00:33:16.568 "malloc2" 00:33:16.568 ], 00:33:16.568 "superblock": false, 00:33:16.568 "method": "bdev_raid_create", 00:33:16.568 "req_id": 1 00:33:16.568 } 00:33:16.568 Got JSON-RPC error response 00:33:16.568 response: 00:33:16.568 { 00:33:16.568 "code": -17, 00:33:16.568 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:16.568 } 00:33:16.568 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:33:16.568 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:16.568 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:16.568 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:16.568 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:33:16.568 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:16.826 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:33:16.826 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:33:16.826 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:17.085 [2024-07-24 16:50:13.804616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:17.085 [2024-07-24 16:50:13.804684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:17.085 [2024-07-24 16:50:13.804708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:33:17.085 [2024-07-24 16:50:13.804730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:17.085 [2024-07-24 16:50:13.807129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:17.085 [2024-07-24 16:50:13.807173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:17.085 [2024-07-24 16:50:13.807231] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:17.085 [2024-07-24 16:50:13.807305] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:17.085 pt1 00:33:17.085 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:33:17.085 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:17.085 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:33:17.085 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:17.085 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:17.085 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:17.085 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:17.085 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:17.085 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:17.085 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:17.085 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:17.085 16:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:17.344 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:17.344 "name": "raid_bdev1", 00:33:17.344 "uuid": "9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5", 00:33:17.344 "strip_size_kb": 0, 00:33:17.344 "state": "configuring", 00:33:17.344 "raid_level": "raid1", 00:33:17.344 "superblock": true, 00:33:17.344 "num_base_bdevs": 2, 00:33:17.344 "num_base_bdevs_discovered": 1, 00:33:17.344 "num_base_bdevs_operational": 2, 00:33:17.344 "base_bdevs_list": [ 00:33:17.344 { 00:33:17.344 "name": "pt1", 00:33:17.344 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:17.344 "is_configured": true, 00:33:17.344 "data_offset": 256, 00:33:17.344 "data_size": 7936 00:33:17.344 }, 00:33:17.344 { 00:33:17.344 "name": null, 
00:33:17.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:17.344 "is_configured": false, 00:33:17.344 "data_offset": 256, 00:33:17.344 "data_size": 7936 00:33:17.344 } 00:33:17.344 ] 00:33:17.344 }' 00:33:17.344 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:17.344 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:17.910 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:33:17.910 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:33:17.910 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:33:17.910 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:18.169 [2024-07-24 16:50:14.835408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:18.169 [2024-07-24 16:50:14.835482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:18.169 [2024-07-24 16:50:14.835508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:33:18.169 [2024-07-24 16:50:14.835527] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:18.169 [2024-07-24 16:50:14.835763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:18.169 [2024-07-24 16:50:14.835786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:18.169 [2024-07-24 16:50:14.835848] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:18.169 [2024-07-24 16:50:14.835886] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt2 is claimed 00:33:18.169 [2024-07-24 16:50:14.836015] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:33:18.169 [2024-07-24 16:50:14.836033] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:18.169 [2024-07-24 16:50:14.836115] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:33:18.169 [2024-07-24 16:50:14.836240] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:33:18.169 [2024-07-24 16:50:14.836255] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:33:18.169 [2024-07-24 16:50:14.836346] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:18.169 pt2 00:33:18.169 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:33:18.169 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:33:18.169 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:18.169 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:18.169 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:18.169 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:18.169 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:18.169 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:18.169 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:18.169 16:50:14 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:18.169 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:18.169 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:18.169 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.169 16:50:14 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.428 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:18.428 "name": "raid_bdev1", 00:33:18.428 "uuid": "9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5", 00:33:18.428 "strip_size_kb": 0, 00:33:18.428 "state": "online", 00:33:18.428 "raid_level": "raid1", 00:33:18.428 "superblock": true, 00:33:18.428 "num_base_bdevs": 2, 00:33:18.428 "num_base_bdevs_discovered": 2, 00:33:18.428 "num_base_bdevs_operational": 2, 00:33:18.428 "base_bdevs_list": [ 00:33:18.428 { 00:33:18.428 "name": "pt1", 00:33:18.428 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:18.428 "is_configured": true, 00:33:18.428 "data_offset": 256, 00:33:18.428 "data_size": 7936 00:33:18.428 }, 00:33:18.428 { 00:33:18.428 "name": "pt2", 00:33:18.428 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:18.428 "is_configured": true, 00:33:18.428 "data_offset": 256, 00:33:18.428 "data_size": 7936 00:33:18.428 } 00:33:18.428 ] 00:33:18.428 }' 00:33:18.428 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:18.428 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:18.995 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # 
verify_raid_bdev_properties raid_bdev1 00:33:18.995 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:18.995 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:18.995 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:18.995 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:18.995 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:33:18.995 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:18.995 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:19.253 [2024-07-24 16:50:15.862543] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:19.253 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:19.253 "name": "raid_bdev1", 00:33:19.253 "aliases": [ 00:33:19.253 "9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5" 00:33:19.253 ], 00:33:19.253 "product_name": "Raid Volume", 00:33:19.253 "block_size": 4128, 00:33:19.253 "num_blocks": 7936, 00:33:19.253 "uuid": "9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5", 00:33:19.253 "md_size": 32, 00:33:19.253 "md_interleave": true, 00:33:19.253 "dif_type": 0, 00:33:19.253 "assigned_rate_limits": { 00:33:19.253 "rw_ios_per_sec": 0, 00:33:19.253 "rw_mbytes_per_sec": 0, 00:33:19.253 "r_mbytes_per_sec": 0, 00:33:19.253 "w_mbytes_per_sec": 0 00:33:19.253 }, 00:33:19.253 "claimed": false, 00:33:19.253 "zoned": false, 00:33:19.253 "supported_io_types": { 00:33:19.253 "read": true, 00:33:19.253 "write": true, 00:33:19.253 "unmap": false, 00:33:19.253 "flush": false, 
00:33:19.253 "reset": true, 00:33:19.253 "nvme_admin": false, 00:33:19.253 "nvme_io": false, 00:33:19.253 "nvme_io_md": false, 00:33:19.253 "write_zeroes": true, 00:33:19.253 "zcopy": false, 00:33:19.253 "get_zone_info": false, 00:33:19.253 "zone_management": false, 00:33:19.253 "zone_append": false, 00:33:19.253 "compare": false, 00:33:19.253 "compare_and_write": false, 00:33:19.253 "abort": false, 00:33:19.253 "seek_hole": false, 00:33:19.253 "seek_data": false, 00:33:19.253 "copy": false, 00:33:19.253 "nvme_iov_md": false 00:33:19.253 }, 00:33:19.253 "memory_domains": [ 00:33:19.253 { 00:33:19.253 "dma_device_id": "system", 00:33:19.253 "dma_device_type": 1 00:33:19.253 }, 00:33:19.253 { 00:33:19.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.253 "dma_device_type": 2 00:33:19.253 }, 00:33:19.253 { 00:33:19.253 "dma_device_id": "system", 00:33:19.253 "dma_device_type": 1 00:33:19.253 }, 00:33:19.253 { 00:33:19.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.253 "dma_device_type": 2 00:33:19.253 } 00:33:19.253 ], 00:33:19.253 "driver_specific": { 00:33:19.253 "raid": { 00:33:19.253 "uuid": "9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5", 00:33:19.253 "strip_size_kb": 0, 00:33:19.253 "state": "online", 00:33:19.253 "raid_level": "raid1", 00:33:19.253 "superblock": true, 00:33:19.253 "num_base_bdevs": 2, 00:33:19.253 "num_base_bdevs_discovered": 2, 00:33:19.253 "num_base_bdevs_operational": 2, 00:33:19.253 "base_bdevs_list": [ 00:33:19.253 { 00:33:19.253 "name": "pt1", 00:33:19.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:19.253 "is_configured": true, 00:33:19.253 "data_offset": 256, 00:33:19.253 "data_size": 7936 00:33:19.253 }, 00:33:19.253 { 00:33:19.253 "name": "pt2", 00:33:19.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:19.253 "is_configured": true, 00:33:19.253 "data_offset": 256, 00:33:19.253 "data_size": 7936 00:33:19.253 } 00:33:19.253 ] 00:33:19.253 } 00:33:19.253 } 00:33:19.253 }' 00:33:19.253 16:50:15 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:19.253 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:19.253 pt2' 00:33:19.253 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:19.253 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:19.253 16:50:15 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:19.511 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:19.511 "name": "pt1", 00:33:19.511 "aliases": [ 00:33:19.511 "00000000-0000-0000-0000-000000000001" 00:33:19.511 ], 00:33:19.511 "product_name": "passthru", 00:33:19.511 "block_size": 4128, 00:33:19.511 "num_blocks": 8192, 00:33:19.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:19.511 "md_size": 32, 00:33:19.511 "md_interleave": true, 00:33:19.511 "dif_type": 0, 00:33:19.511 "assigned_rate_limits": { 00:33:19.511 "rw_ios_per_sec": 0, 00:33:19.511 "rw_mbytes_per_sec": 0, 00:33:19.511 "r_mbytes_per_sec": 0, 00:33:19.511 "w_mbytes_per_sec": 0 00:33:19.511 }, 00:33:19.511 "claimed": true, 00:33:19.511 "claim_type": "exclusive_write", 00:33:19.511 "zoned": false, 00:33:19.511 "supported_io_types": { 00:33:19.511 "read": true, 00:33:19.511 "write": true, 00:33:19.511 "unmap": true, 00:33:19.511 "flush": true, 00:33:19.511 "reset": true, 00:33:19.511 "nvme_admin": false, 00:33:19.511 "nvme_io": false, 00:33:19.511 "nvme_io_md": false, 00:33:19.511 "write_zeroes": true, 00:33:19.511 "zcopy": true, 00:33:19.511 "get_zone_info": false, 00:33:19.511 "zone_management": false, 00:33:19.511 "zone_append": false, 
00:33:19.511 "compare": false, 00:33:19.511 "compare_and_write": false, 00:33:19.511 "abort": true, 00:33:19.511 "seek_hole": false, 00:33:19.511 "seek_data": false, 00:33:19.511 "copy": true, 00:33:19.511 "nvme_iov_md": false 00:33:19.511 }, 00:33:19.511 "memory_domains": [ 00:33:19.511 { 00:33:19.511 "dma_device_id": "system", 00:33:19.511 "dma_device_type": 1 00:33:19.511 }, 00:33:19.511 { 00:33:19.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.511 "dma_device_type": 2 00:33:19.512 } 00:33:19.512 ], 00:33:19.512 "driver_specific": { 00:33:19.512 "passthru": { 00:33:19.512 "name": "pt1", 00:33:19.512 "base_bdev_name": "malloc1" 00:33:19.512 } 00:33:19.512 } 00:33:19.512 }' 00:33:19.512 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:19.512 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:19.512 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:19.512 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:19.512 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:19.512 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:19.512 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:19.512 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:19.770 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:19.770 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:19.770 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:19.770 16:50:16 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:19.770 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:19.770 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:19.770 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:20.029 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:20.029 "name": "pt2", 00:33:20.029 "aliases": [ 00:33:20.029 "00000000-0000-0000-0000-000000000002" 00:33:20.029 ], 00:33:20.029 "product_name": "passthru", 00:33:20.029 "block_size": 4128, 00:33:20.029 "num_blocks": 8192, 00:33:20.029 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:20.029 "md_size": 32, 00:33:20.029 "md_interleave": true, 00:33:20.029 "dif_type": 0, 00:33:20.029 "assigned_rate_limits": { 00:33:20.029 "rw_ios_per_sec": 0, 00:33:20.029 "rw_mbytes_per_sec": 0, 00:33:20.029 "r_mbytes_per_sec": 0, 00:33:20.029 "w_mbytes_per_sec": 0 00:33:20.029 }, 00:33:20.029 "claimed": true, 00:33:20.029 "claim_type": "exclusive_write", 00:33:20.029 "zoned": false, 00:33:20.029 "supported_io_types": { 00:33:20.029 "read": true, 00:33:20.029 "write": true, 00:33:20.029 "unmap": true, 00:33:20.029 "flush": true, 00:33:20.029 "reset": true, 00:33:20.029 "nvme_admin": false, 00:33:20.029 "nvme_io": false, 00:33:20.029 "nvme_io_md": false, 00:33:20.029 "write_zeroes": true, 00:33:20.029 "zcopy": true, 00:33:20.029 "get_zone_info": false, 00:33:20.029 "zone_management": false, 00:33:20.029 "zone_append": false, 00:33:20.029 "compare": false, 00:33:20.029 "compare_and_write": false, 00:33:20.029 "abort": true, 00:33:20.029 "seek_hole": false, 00:33:20.029 "seek_data": false, 00:33:20.029 "copy": true, 00:33:20.029 
"nvme_iov_md": false 00:33:20.029 }, 00:33:20.029 "memory_domains": [ 00:33:20.029 { 00:33:20.029 "dma_device_id": "system", 00:33:20.029 "dma_device_type": 1 00:33:20.029 }, 00:33:20.029 { 00:33:20.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:20.029 "dma_device_type": 2 00:33:20.029 } 00:33:20.029 ], 00:33:20.029 "driver_specific": { 00:33:20.029 "passthru": { 00:33:20.029 "name": "pt2", 00:33:20.029 "base_bdev_name": "malloc2" 00:33:20.029 } 00:33:20.029 } 00:33:20.029 }' 00:33:20.029 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:20.029 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:20.029 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:20.029 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:20.029 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:20.029 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:20.029 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:20.029 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:20.287 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:20.288 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:20.288 16:50:16 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:20.288 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:20.288 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:20.288 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:33:20.546 [2024-07-24 16:50:17.250352] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:20.546 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # '[' 9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5 '!=' 9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5 ']' 00:33:20.546 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:33:20.546 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:20.546 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:33:20.546 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:20.805 [2024-07-24 16:50:17.470603] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:20.805 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:20.805 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:20.805 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:20.805 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:20.805 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:20.805 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:20.805 16:50:17 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:20.805 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:20.805 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:20.805 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:20.805 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.805 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.063 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:21.063 "name": "raid_bdev1", 00:33:21.063 "uuid": "9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5", 00:33:21.063 "strip_size_kb": 0, 00:33:21.063 "state": "online", 00:33:21.063 "raid_level": "raid1", 00:33:21.063 "superblock": true, 00:33:21.063 "num_base_bdevs": 2, 00:33:21.063 "num_base_bdevs_discovered": 1, 00:33:21.063 "num_base_bdevs_operational": 1, 00:33:21.063 "base_bdevs_list": [ 00:33:21.063 { 00:33:21.063 "name": null, 00:33:21.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:21.063 "is_configured": false, 00:33:21.063 "data_offset": 256, 00:33:21.063 "data_size": 7936 00:33:21.063 }, 00:33:21.063 { 00:33:21.063 "name": "pt2", 00:33:21.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:21.063 "is_configured": true, 00:33:21.063 "data_offset": 256, 00:33:21.063 "data_size": 7936 00:33:21.063 } 00:33:21.063 ] 00:33:21.063 }' 00:33:21.063 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:21.063 16:50:17 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- 
# set +x 00:33:21.629 16:50:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:21.888 [2024-07-24 16:50:18.493323] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:21.888 [2024-07-24 16:50:18.493353] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:21.888 [2024-07-24 16:50:18.493436] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:21.888 [2024-07-24 16:50:18.493491] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:21.888 [2024-07-24 16:50:18.493510] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:33:21.888 16:50:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.888 16:50:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:33:21.888 16:50:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:33:21.888 16:50:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:33:21.888 16:50:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:33:21.888 16:50:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:33:21.888 16:50:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:22.147 16:50:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i++ )) 
00:33:22.147 16:50:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:33:22.147 16:50:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:33:22.147 16:50:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:33:22.147 16:50:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@534 -- # i=1 00:33:22.147 16:50:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:22.405 [2024-07-24 16:50:19.171150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:22.405 [2024-07-24 16:50:19.171244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:22.405 [2024-07-24 16:50:19.171270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:33:22.405 [2024-07-24 16:50:19.171288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:22.405 [2024-07-24 16:50:19.173736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:22.405 [2024-07-24 16:50:19.173773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:22.405 [2024-07-24 16:50:19.173835] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:22.405 [2024-07-24 16:50:19.173906] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:22.405 [2024-07-24 16:50:19.174016] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042680 00:33:22.405 [2024-07-24 16:50:19.174034] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:22.405 [2024-07-24 16:50:19.174115] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:33:22.405 [2024-07-24 16:50:19.174260] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042680 00:33:22.405 [2024-07-24 16:50:19.174275] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042680 00:33:22.406 [2024-07-24 16:50:19.174363] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:22.406 pt2 00:33:22.406 16:50:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:22.406 16:50:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:22.406 16:50:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:22.406 16:50:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:22.406 16:50:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:22.406 16:50:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:22.406 16:50:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:22.406 16:50:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:22.406 16:50:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:22.406 16:50:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:22.406 16:50:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:22.406 16:50:19 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.664 16:50:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:22.664 "name": "raid_bdev1", 00:33:22.664 "uuid": "9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5", 00:33:22.664 "strip_size_kb": 0, 00:33:22.664 "state": "online", 00:33:22.664 "raid_level": "raid1", 00:33:22.664 "superblock": true, 00:33:22.664 "num_base_bdevs": 2, 00:33:22.664 "num_base_bdevs_discovered": 1, 00:33:22.664 "num_base_bdevs_operational": 1, 00:33:22.664 "base_bdevs_list": [ 00:33:22.664 { 00:33:22.664 "name": null, 00:33:22.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:22.664 "is_configured": false, 00:33:22.664 "data_offset": 256, 00:33:22.664 "data_size": 7936 00:33:22.664 }, 00:33:22.664 { 00:33:22.664 "name": "pt2", 00:33:22.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:22.665 "is_configured": true, 00:33:22.665 "data_offset": 256, 00:33:22.665 "data_size": 7936 00:33:22.665 } 00:33:22.665 ] 00:33:22.665 }' 00:33:22.665 16:50:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:22.665 16:50:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:23.231 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:23.489 [2024-07-24 16:50:20.213977] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:23.489 [2024-07-24 16:50:20.214013] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:23.489 [2024-07-24 16:50:20.214095] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:23.489 [2024-07-24 16:50:20.214168] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:23.489 [2024-07-24 16:50:20.214189] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042680 name raid_bdev1, state offline 00:33:23.489 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.489 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:33:23.748 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:33:23.748 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:33:23.748 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:33:23.748 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:24.007 [2024-07-24 16:50:20.671158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:24.007 [2024-07-24 16:50:20.671220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:24.007 [2024-07-24 16:50:20.671246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:33:24.007 [2024-07-24 16:50:20.671266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:24.007 [2024-07-24 16:50:20.673693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:24.007 [2024-07-24 16:50:20.673725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:24.007 [2024-07-24 16:50:20.673784] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 
00:33:24.007 [2024-07-24 16:50:20.673874] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:24.007 [2024-07-24 16:50:20.674029] bdev_raid.c:3639:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:24.007 [2024-07-24 16:50:20.674046] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:24.007 [2024-07-24 16:50:20.674071] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000042f80 name raid_bdev1, state configuring 00:33:24.007 [2024-07-24 16:50:20.674173] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:24.007 [2024-07-24 16:50:20.674265] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000043280 00:33:24.007 [2024-07-24 16:50:20.674278] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:24.007 [2024-07-24 16:50:20.674356] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:33:24.007 [2024-07-24 16:50:20.674473] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000043280 00:33:24.007 [2024-07-24 16:50:20.674490] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000043280 00:33:24.007 [2024-07-24 16:50:20.674580] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:24.007 pt1 00:33:24.007 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:33:24.007 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:24.007 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:24.007 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:33:24.007 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:24.007 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:24.007 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:24.007 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:24.007 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:24.007 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:24.007 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:24.007 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:24.007 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.265 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:24.265 "name": "raid_bdev1", 00:33:24.265 "uuid": "9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5", 00:33:24.265 "strip_size_kb": 0, 00:33:24.265 "state": "online", 00:33:24.265 "raid_level": "raid1", 00:33:24.265 "superblock": true, 00:33:24.265 "num_base_bdevs": 2, 00:33:24.265 "num_base_bdevs_discovered": 1, 00:33:24.265 "num_base_bdevs_operational": 1, 00:33:24.265 "base_bdevs_list": [ 00:33:24.265 { 00:33:24.265 "name": null, 00:33:24.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.265 "is_configured": false, 00:33:24.265 "data_offset": 256, 00:33:24.265 "data_size": 7936 00:33:24.265 }, 00:33:24.265 { 00:33:24.265 "name": "pt2", 
00:33:24.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:24.265 "is_configured": true, 00:33:24.265 "data_offset": 256, 00:33:24.265 "data_size": 7936 00:33:24.265 } 00:33:24.265 ] 00:33:24.265 }' 00:33:24.265 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:24.265 16:50:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:24.832 16:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:33:24.832 16:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:25.090 16:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:33:25.091 16:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:25.091 16:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:33:25.349 [2024-07-24 16:50:21.954959] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:25.349 16:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # '[' 9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5 '!=' 9a5a7935-8b6e-4182-bbfb-4bc92eb1d3c5 ']' 00:33:25.349 16:50:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@578 -- # killprocess 1806750 00:33:25.349 16:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 1806750 ']' 00:33:25.349 16:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 1806750 00:33:25.349 16:50:21 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@955 -- # uname 00:33:25.349 16:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:25.349 16:50:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1806750 00:33:25.349 16:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:25.349 16:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:25.349 16:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1806750' 00:33:25.349 killing process with pid 1806750 00:33:25.349 16:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 1806750 00:33:25.349 [2024-07-24 16:50:22.018687] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:25.349 [2024-07-24 16:50:22.018794] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:25.349 [2024-07-24 16:50:22.018851] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:25.349 16:50:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 1806750 00:33:25.349 [2024-07-24 16:50:22.018874] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000043280 name raid_bdev1, state offline 00:33:25.349 [2024-07-24 16:50:22.200579] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:27.258 16:50:23 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@580 -- # return 0 00:33:27.258 00:33:27.258 real 0m16.742s 00:33:27.258 user 0m28.573s 00:33:27.258 sys 0m2.850s 00:33:27.258 16:50:23 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:27.258 16:50:23 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:27.258 ************************************ 00:33:27.258 END TEST raid_superblock_test_md_interleaved 00:33:27.258 ************************************ 00:33:27.258 16:50:23 bdev_raid -- bdev/bdev_raid.sh@994 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:33:27.258 16:50:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:33:27.258 16:50:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:27.258 16:50:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:27.258 ************************************ 00:33:27.258 START TEST raid_rebuild_test_sb_md_interleaved 00:33:27.258 ************************************ 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # local verify=false 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 
-- # (( i++ )) 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # local strip_size 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # local create_arg 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@594 -- # local data_offset 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # raid_pid=1809716 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@613 -- # waitforlisten 1809716 /var/tmp/spdk-raid.sock 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 1809716 ']' 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:27.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:27.258 16:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:27.258 [2024-07-24 16:50:24.063730] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:33:27.258 [2024-07-24 16:50:24.063851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1809716 ] 00:33:27.258 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:27.258 Zero copy mechanism will not be used. 
00:33:27.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.516 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:27.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.516 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:27.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.516 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:27.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.516 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:27.516 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.516 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:27.517 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:27.517 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:27.517 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:27.517 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:27.517 [2024-07-24 16:50:24.288914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.775 [2024-07-24 16:50:24.581451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.342 [2024-07-24 16:50:24.918323] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:28.342 [2024-07-24 16:50:24.918359] bdev_raid.c:1442:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:28.342 16:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:28.342 16:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:33:28.342 16:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:33:28.342 16:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:33:28.600 BaseBdev1_malloc 00:33:28.600 16:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:28.858 [2024-07-24 16:50:25.565730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 
00:33:28.858 [2024-07-24 16:50:25.565791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:28.858 [2024-07-24 16:50:25.565821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:33:28.858 [2024-07-24 16:50:25.565840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:28.858 [2024-07-24 16:50:25.568251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:28.858 [2024-07-24 16:50:25.568290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:28.858 BaseBdev1 00:33:28.858 16:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:33:28.858 16:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:33:29.116 BaseBdev2_malloc 00:33:29.116 16:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:29.417 [2024-07-24 16:50:26.070228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:29.417 [2024-07-24 16:50:26.070294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.417 [2024-07-24 16:50:26.070323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:33:29.417 [2024-07-24 16:50:26.070348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.417 [2024-07-24 16:50:26.072747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.417 [2024-07-24 16:50:26.072786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:33:29.417 BaseBdev2 00:33:29.417 16:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:33:29.697 spare_malloc 00:33:29.697 16:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:29.955 spare_delay 00:33:29.955 16:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:29.955 [2024-07-24 16:50:26.803572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:29.955 [2024-07-24 16:50:26.803633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.955 [2024-07-24 16:50:26.803664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:33:29.955 [2024-07-24 16:50:26.803683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.955 [2024-07-24 16:50:26.806134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.955 [2024-07-24 16:50:26.806178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:29.955 spare 00:33:30.214 16:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:33:30.214 [2024-07-24 16:50:27.028229] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:30.214 [2024-07-24 
16:50:27.030590] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:30.214 [2024-07-24 16:50:27.030825] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000041a80 00:33:30.214 [2024-07-24 16:50:27.030848] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:30.214 [2024-07-24 16:50:27.030963] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:33:30.214 [2024-07-24 16:50:27.031104] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000041a80 00:33:30.214 [2024-07-24 16:50:27.031118] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000041a80 00:33:30.214 [2024-07-24 16:50:27.031231] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:30.214 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:30.214 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:30.214 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:30.214 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:30.214 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:30.214 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:30.214 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:30.214 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:30.214 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:33:30.214 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:30.214 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:30.214 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:30.472 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:30.472 "name": "raid_bdev1", 00:33:30.472 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:30.472 "strip_size_kb": 0, 00:33:30.472 "state": "online", 00:33:30.472 "raid_level": "raid1", 00:33:30.472 "superblock": true, 00:33:30.472 "num_base_bdevs": 2, 00:33:30.472 "num_base_bdevs_discovered": 2, 00:33:30.472 "num_base_bdevs_operational": 2, 00:33:30.472 "base_bdevs_list": [ 00:33:30.472 { 00:33:30.472 "name": "BaseBdev1", 00:33:30.472 "uuid": "72556889-d2f5-5258-be6c-3ff83b91e70e", 00:33:30.472 "is_configured": true, 00:33:30.472 "data_offset": 256, 00:33:30.472 "data_size": 7936 00:33:30.472 }, 00:33:30.472 { 00:33:30.472 "name": "BaseBdev2", 00:33:30.472 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:30.472 "is_configured": true, 00:33:30.472 "data_offset": 256, 00:33:30.472 "data_size": 7936 00:33:30.472 } 00:33:30.472 ] 00:33:30.472 }' 00:33:30.472 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:30.472 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:31.038 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:33:31.038 16:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:31.296 [2024-07-24 16:50:28.007372] bdev_raid.c:1119:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:31.296 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:33:31.296 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:31.296 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:31.576 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:33:31.576 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:33:31.576 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # '[' false = true ']' 00:33:31.576 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:31.835 [2024-07-24 16:50:28.452201] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:31.835 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:31.835 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:31.835 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:31.835 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:31.835 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:31.836 16:50:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:31.836 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:31.836 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:31.836 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:31.836 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:31.836 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:31.836 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.094 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:32.094 "name": "raid_bdev1", 00:33:32.094 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:32.094 "strip_size_kb": 0, 00:33:32.094 "state": "online", 00:33:32.094 "raid_level": "raid1", 00:33:32.094 "superblock": true, 00:33:32.094 "num_base_bdevs": 2, 00:33:32.094 "num_base_bdevs_discovered": 1, 00:33:32.094 "num_base_bdevs_operational": 1, 00:33:32.094 "base_bdevs_list": [ 00:33:32.094 { 00:33:32.094 "name": null, 00:33:32.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.094 "is_configured": false, 00:33:32.094 "data_offset": 256, 00:33:32.094 "data_size": 7936 00:33:32.094 }, 00:33:32.094 { 00:33:32.094 "name": "BaseBdev2", 00:33:32.094 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:32.094 "is_configured": true, 00:33:32.094 "data_offset": 256, 00:33:32.094 "data_size": 7936 00:33:32.094 } 00:33:32.094 ] 00:33:32.094 }' 00:33:32.094 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:32.094 16:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:32.659 16:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:32.659 [2024-07-24 16:50:29.414809] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:32.659 [2024-07-24 16:50:29.442035] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:33:32.659 [2024-07-24 16:50:29.444353] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:32.659 16:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:34.033 16:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:34.033 16:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:34.033 16:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:34.033 16:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:34.033 16:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:34.033 16:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:34.034 16:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.034 16:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:34.034 
"name": "raid_bdev1", 00:33:34.034 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:34.034 "strip_size_kb": 0, 00:33:34.034 "state": "online", 00:33:34.034 "raid_level": "raid1", 00:33:34.034 "superblock": true, 00:33:34.034 "num_base_bdevs": 2, 00:33:34.034 "num_base_bdevs_discovered": 2, 00:33:34.034 "num_base_bdevs_operational": 2, 00:33:34.034 "process": { 00:33:34.034 "type": "rebuild", 00:33:34.034 "target": "spare", 00:33:34.034 "progress": { 00:33:34.034 "blocks": 3072, 00:33:34.034 "percent": 38 00:33:34.034 } 00:33:34.034 }, 00:33:34.034 "base_bdevs_list": [ 00:33:34.034 { 00:33:34.034 "name": "spare", 00:33:34.034 "uuid": "2ef7fdbd-814f-5370-b9ea-04d35a109370", 00:33:34.034 "is_configured": true, 00:33:34.034 "data_offset": 256, 00:33:34.034 "data_size": 7936 00:33:34.034 }, 00:33:34.034 { 00:33:34.034 "name": "BaseBdev2", 00:33:34.034 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:34.034 "is_configured": true, 00:33:34.034 "data_offset": 256, 00:33:34.034 "data_size": 7936 00:33:34.034 } 00:33:34.034 ] 00:33:34.034 }' 00:33:34.034 16:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:34.034 16:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:34.034 16:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:34.034 16:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:34.034 16:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:34.292 [2024-07-24 16:50:30.985203] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:34.292 [2024-07-24 16:50:31.057379] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: 
Finished rebuild on raid bdev raid_bdev1: No such device 00:33:34.292 [2024-07-24 16:50:31.057442] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:34.292 [2024-07-24 16:50:31.057463] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:34.292 [2024-07-24 16:50:31.057478] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:34.292 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:34.292 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:34.292 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:34.292 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:34.292 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:34.292 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:34.292 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:34.292 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:34.292 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:34.292 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:34.292 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:34.292 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:33:34.550 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:34.550 "name": "raid_bdev1", 00:33:34.550 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:34.550 "strip_size_kb": 0, 00:33:34.550 "state": "online", 00:33:34.550 "raid_level": "raid1", 00:33:34.550 "superblock": true, 00:33:34.550 "num_base_bdevs": 2, 00:33:34.550 "num_base_bdevs_discovered": 1, 00:33:34.550 "num_base_bdevs_operational": 1, 00:33:34.550 "base_bdevs_list": [ 00:33:34.550 { 00:33:34.550 "name": null, 00:33:34.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.550 "is_configured": false, 00:33:34.550 "data_offset": 256, 00:33:34.550 "data_size": 7936 00:33:34.550 }, 00:33:34.550 { 00:33:34.550 "name": "BaseBdev2", 00:33:34.550 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:34.550 "is_configured": true, 00:33:34.550 "data_offset": 256, 00:33:34.550 "data_size": 7936 00:33:34.550 } 00:33:34.550 ] 00:33:34.550 }' 00:33:34.550 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:34.550 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:35.116 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:35.116 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:35.116 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:35.116 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:35.116 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:35.116 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:35.116 16:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:35.374 16:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:35.374 "name": "raid_bdev1", 00:33:35.374 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:35.374 "strip_size_kb": 0, 00:33:35.374 "state": "online", 00:33:35.374 "raid_level": "raid1", 00:33:35.374 "superblock": true, 00:33:35.374 "num_base_bdevs": 2, 00:33:35.374 "num_base_bdevs_discovered": 1, 00:33:35.374 "num_base_bdevs_operational": 1, 00:33:35.374 "base_bdevs_list": [ 00:33:35.374 { 00:33:35.374 "name": null, 00:33:35.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.374 "is_configured": false, 00:33:35.374 "data_offset": 256, 00:33:35.374 "data_size": 7936 00:33:35.374 }, 00:33:35.374 { 00:33:35.374 "name": "BaseBdev2", 00:33:35.374 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:35.374 "is_configured": true, 00:33:35.374 "data_offset": 256, 00:33:35.374 "data_size": 7936 00:33:35.374 } 00:33:35.374 ] 00:33:35.374 }' 00:33:35.374 16:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:35.374 16:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:35.374 16:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:35.374 16:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:35.374 16:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:35.632 [2024-07-24 
16:50:32.441283] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:35.632 [2024-07-24 16:50:32.466646] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:33:35.632 [2024-07-24 16:50:32.468948] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:35.632 16:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@678 -- # sleep 1 00:33:37.007 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:37.007 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:37.007 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:37.007 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:37.007 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:37.007 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:37.007 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:37.007 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:37.007 "name": "raid_bdev1", 00:33:37.007 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:37.007 "strip_size_kb": 0, 00:33:37.007 "state": "online", 00:33:37.007 "raid_level": "raid1", 00:33:37.007 "superblock": true, 00:33:37.007 "num_base_bdevs": 2, 00:33:37.007 "num_base_bdevs_discovered": 2, 00:33:37.007 "num_base_bdevs_operational": 2, 00:33:37.007 "process": { 00:33:37.007 "type": "rebuild", 
00:33:37.007 "target": "spare", 00:33:37.007 "progress": { 00:33:37.007 "blocks": 3072, 00:33:37.007 "percent": 38 00:33:37.007 } 00:33:37.007 }, 00:33:37.007 "base_bdevs_list": [ 00:33:37.007 { 00:33:37.007 "name": "spare", 00:33:37.007 "uuid": "2ef7fdbd-814f-5370-b9ea-04d35a109370", 00:33:37.007 "is_configured": true, 00:33:37.007 "data_offset": 256, 00:33:37.007 "data_size": 7936 00:33:37.007 }, 00:33:37.007 { 00:33:37.007 "name": "BaseBdev2", 00:33:37.007 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:37.007 "is_configured": true, 00:33:37.008 "data_offset": 256, 00:33:37.008 "data_size": 7936 00:33:37.008 } 00:33:37.008 ] 00:33:37.008 }' 00:33:37.008 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:37.008 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:37.008 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:37.008 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:37.008 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:33:37.008 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:33:37.008 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:33:37.008 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:33:37.008 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:33:37.008 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:33:37.008 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # 
local timeout=1237 00:33:37.008 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:33:37.008 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:37.009 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:37.009 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:37.009 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:37.009 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:37.009 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:37.009 16:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:37.268 16:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:37.268 "name": "raid_bdev1", 00:33:37.268 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:37.268 "strip_size_kb": 0, 00:33:37.268 "state": "online", 00:33:37.268 "raid_level": "raid1", 00:33:37.268 "superblock": true, 00:33:37.268 "num_base_bdevs": 2, 00:33:37.268 "num_base_bdevs_discovered": 2, 00:33:37.268 "num_base_bdevs_operational": 2, 00:33:37.268 "process": { 00:33:37.268 "type": "rebuild", 00:33:37.268 "target": "spare", 00:33:37.268 "progress": { 00:33:37.268 "blocks": 3840, 00:33:37.268 "percent": 48 00:33:37.268 } 00:33:37.268 }, 00:33:37.268 "base_bdevs_list": [ 00:33:37.268 { 00:33:37.268 "name": "spare", 00:33:37.268 "uuid": "2ef7fdbd-814f-5370-b9ea-04d35a109370", 00:33:37.268 "is_configured": true, 00:33:37.268 
"data_offset": 256, 00:33:37.268 "data_size": 7936 00:33:37.268 }, 00:33:37.268 { 00:33:37.268 "name": "BaseBdev2", 00:33:37.268 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:37.268 "is_configured": true, 00:33:37.268 "data_offset": 256, 00:33:37.268 "data_size": 7936 00:33:37.268 } 00:33:37.268 ] 00:33:37.268 }' 00:33:37.268 16:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:37.268 16:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:37.268 16:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:37.268 16:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:37.268 16:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:33:38.642 16:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:33:38.643 16:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:38.643 16:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:38.643 16:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:38.643 16:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:38.643 16:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:38.643 16:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:38.643 16:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:38.643 16:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:38.643 "name": "raid_bdev1", 00:33:38.643 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:38.643 "strip_size_kb": 0, 00:33:38.643 "state": "online", 00:33:38.643 "raid_level": "raid1", 00:33:38.643 "superblock": true, 00:33:38.643 "num_base_bdevs": 2, 00:33:38.643 "num_base_bdevs_discovered": 2, 00:33:38.643 "num_base_bdevs_operational": 2, 00:33:38.643 "process": { 00:33:38.643 "type": "rebuild", 00:33:38.643 "target": "spare", 00:33:38.643 "progress": { 00:33:38.643 "blocks": 7168, 00:33:38.643 "percent": 90 00:33:38.643 } 00:33:38.643 }, 00:33:38.643 "base_bdevs_list": [ 00:33:38.643 { 00:33:38.643 "name": "spare", 00:33:38.643 "uuid": "2ef7fdbd-814f-5370-b9ea-04d35a109370", 00:33:38.643 "is_configured": true, 00:33:38.643 "data_offset": 256, 00:33:38.643 "data_size": 7936 00:33:38.643 }, 00:33:38.643 { 00:33:38.643 "name": "BaseBdev2", 00:33:38.643 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:38.643 "is_configured": true, 00:33:38.643 "data_offset": 256, 00:33:38.643 "data_size": 7936 00:33:38.643 } 00:33:38.643 ] 00:33:38.643 }' 00:33:38.643 16:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:38.643 16:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:38.643 16:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:38.643 16:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:38.643 16:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:33:38.901 [2024-07-24 16:50:35.594026] bdev_raid.c:2870:raid_bdev_process_thread_run: *DEBUG*: process completed on 
raid_bdev1 00:33:38.901 [2024-07-24 16:50:35.594104] bdev_raid.c:2532:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:38.901 [2024-07-24 16:50:35.594220] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:39.837 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:33:39.837 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:39.837 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:39.837 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:39.837 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:39.837 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:39.837 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:39.837 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:39.837 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:39.837 "name": "raid_bdev1", 00:33:39.837 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:39.837 "strip_size_kb": 0, 00:33:39.837 "state": "online", 00:33:39.837 "raid_level": "raid1", 00:33:39.837 "superblock": true, 00:33:39.837 "num_base_bdevs": 2, 00:33:39.837 "num_base_bdevs_discovered": 2, 00:33:39.837 "num_base_bdevs_operational": 2, 00:33:39.837 "base_bdevs_list": [ 00:33:39.837 { 00:33:39.837 "name": "spare", 00:33:39.837 "uuid": "2ef7fdbd-814f-5370-b9ea-04d35a109370", 
00:33:39.837 "is_configured": true, 00:33:39.837 "data_offset": 256, 00:33:39.837 "data_size": 7936 00:33:39.837 }, 00:33:39.837 { 00:33:39.837 "name": "BaseBdev2", 00:33:39.837 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:39.837 "is_configured": true, 00:33:39.837 "data_offset": 256, 00:33:39.837 "data_size": 7936 00:33:39.837 } 00:33:39.837 ] 00:33:39.837 }' 00:33:39.837 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:40.095 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:40.095 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:40.095 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:40.095 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@724 -- # break 00:33:40.095 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:40.095 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:40.095 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:40.095 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:40.095 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:40.095 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:40.095 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.354 16:50:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:40.354 "name": "raid_bdev1", 00:33:40.354 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:40.354 "strip_size_kb": 0, 00:33:40.354 "state": "online", 00:33:40.354 "raid_level": "raid1", 00:33:40.354 "superblock": true, 00:33:40.354 "num_base_bdevs": 2, 00:33:40.354 "num_base_bdevs_discovered": 2, 00:33:40.354 "num_base_bdevs_operational": 2, 00:33:40.354 "base_bdevs_list": [ 00:33:40.354 { 00:33:40.354 "name": "spare", 00:33:40.354 "uuid": "2ef7fdbd-814f-5370-b9ea-04d35a109370", 00:33:40.354 "is_configured": true, 00:33:40.354 "data_offset": 256, 00:33:40.354 "data_size": 7936 00:33:40.354 }, 00:33:40.354 { 00:33:40.354 "name": "BaseBdev2", 00:33:40.354 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:40.354 "is_configured": true, 00:33:40.354 "data_offset": 256, 00:33:40.354 "data_size": 7936 00:33:40.354 } 00:33:40.354 ] 00:33:40.354 }' 00:33:40.354 16:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:40.354 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.613 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:40.613 "name": "raid_bdev1", 00:33:40.613 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:40.613 "strip_size_kb": 0, 00:33:40.613 "state": "online", 00:33:40.613 "raid_level": "raid1", 00:33:40.613 "superblock": true, 00:33:40.613 "num_base_bdevs": 2, 00:33:40.613 "num_base_bdevs_discovered": 2, 00:33:40.613 "num_base_bdevs_operational": 2, 00:33:40.613 "base_bdevs_list": [ 00:33:40.613 { 00:33:40.613 "name": "spare", 00:33:40.613 "uuid": "2ef7fdbd-814f-5370-b9ea-04d35a109370", 00:33:40.613 "is_configured": true, 00:33:40.613 "data_offset": 256, 00:33:40.613 "data_size": 7936 00:33:40.613 }, 00:33:40.613 { 00:33:40.613 "name": "BaseBdev2", 00:33:40.613 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:40.613 "is_configured": true, 00:33:40.613 "data_offset": 256, 00:33:40.613 
"data_size": 7936 00:33:40.613 } 00:33:40.613 ] 00:33:40.613 }' 00:33:40.613 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:40.613 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.179 16:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:41.746 [2024-07-24 16:50:38.304640] bdev_raid.c:2382:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:41.746 [2024-07-24 16:50:38.304676] bdev_raid.c:1870:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:41.746 [2024-07-24 16:50:38.304766] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:41.746 [2024-07-24 16:50:38.304844] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:41.747 [2024-07-24 16:50:38.304861] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000041a80 name raid_bdev1, state offline 00:33:41.747 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:41.747 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # jq length 00:33:41.747 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:33:41.747 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@737 -- # '[' false = true ']' 00:33:41.747 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:33:41.747 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:42.005 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:42.264 [2024-07-24 16:50:38.922263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:42.264 [2024-07-24 16:50:38.922329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:42.264 [2024-07-24 16:50:38.922359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:33:42.264 [2024-07-24 16:50:38.922374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:42.264 [2024-07-24 16:50:38.924866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:42.264 [2024-07-24 16:50:38.924900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:42.264 [2024-07-24 16:50:38.924978] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:42.264 [2024-07-24 16:50:38.925055] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:42.264 [2024-07-24 16:50:38.925208] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:42.264 spare 00:33:42.264 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:42.264 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:42.264 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:42.264 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:33:42.264 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:42.264 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:42.264 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:42.264 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:42.264 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:42.264 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:42.264 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:42.264 16:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:42.264 [2024-07-24 16:50:39.025553] bdev_raid.c:1720:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000042c80 00:33:42.264 [2024-07-24 16:50:39.025590] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:42.264 [2024-07-24 16:50:39.025703] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:33:42.264 [2024-07-24 16:50:39.025869] bdev_raid.c:1750:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000042c80 00:33:42.264 [2024-07-24 16:50:39.025884] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000042c80 00:33:42.264 [2024-07-24 16:50:39.025989] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:42.830 16:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:42.830 "name": 
"raid_bdev1", 00:33:42.830 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:42.830 "strip_size_kb": 0, 00:33:42.830 "state": "online", 00:33:42.830 "raid_level": "raid1", 00:33:42.830 "superblock": true, 00:33:42.830 "num_base_bdevs": 2, 00:33:42.830 "num_base_bdevs_discovered": 2, 00:33:42.830 "num_base_bdevs_operational": 2, 00:33:42.830 "base_bdevs_list": [ 00:33:42.830 { 00:33:42.830 "name": "spare", 00:33:42.830 "uuid": "2ef7fdbd-814f-5370-b9ea-04d35a109370", 00:33:42.830 "is_configured": true, 00:33:42.830 "data_offset": 256, 00:33:42.830 "data_size": 7936 00:33:42.830 }, 00:33:42.830 { 00:33:42.830 "name": "BaseBdev2", 00:33:42.830 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:42.830 "is_configured": true, 00:33:42.830 "data_offset": 256, 00:33:42.830 "data_size": 7936 00:33:42.830 } 00:33:42.830 ] 00:33:42.830 }' 00:33:42.831 16:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:42.831 16:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:43.397 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:43.397 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:43.397 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:43.397 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:43.397 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:43.397 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:43.397 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:43.656 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:43.656 "name": "raid_bdev1", 00:33:43.656 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:43.656 "strip_size_kb": 0, 00:33:43.656 "state": "online", 00:33:43.656 "raid_level": "raid1", 00:33:43.656 "superblock": true, 00:33:43.656 "num_base_bdevs": 2, 00:33:43.656 "num_base_bdevs_discovered": 2, 00:33:43.656 "num_base_bdevs_operational": 2, 00:33:43.656 "base_bdevs_list": [ 00:33:43.656 { 00:33:43.656 "name": "spare", 00:33:43.656 "uuid": "2ef7fdbd-814f-5370-b9ea-04d35a109370", 00:33:43.656 "is_configured": true, 00:33:43.656 "data_offset": 256, 00:33:43.656 "data_size": 7936 00:33:43.656 }, 00:33:43.656 { 00:33:43.656 "name": "BaseBdev2", 00:33:43.656 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:43.656 "is_configured": true, 00:33:43.656 "data_offset": 256, 00:33:43.656 "data_size": 7936 00:33:43.656 } 00:33:43.656 ] 00:33:43.656 }' 00:33:43.656 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:43.656 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:43.656 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:43.656 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:43.656 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:43.656 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:43.914 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- 
# [[ spare == \s\p\a\r\e ]] 00:33:43.915 16:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:44.481 [2024-07-24 16:50:41.060172] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:44.481 16:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:44.481 16:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:44.481 16:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:44.481 16:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:44.481 16:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:44.481 16:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:44.481 16:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:44.481 16:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:44.481 16:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:44.481 16:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:44.481 16:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:44.481 16:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.481 16:50:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:44.481 "name": "raid_bdev1", 00:33:44.481 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:44.481 "strip_size_kb": 0, 00:33:44.481 "state": "online", 00:33:44.481 "raid_level": "raid1", 00:33:44.481 "superblock": true, 00:33:44.481 "num_base_bdevs": 2, 00:33:44.481 "num_base_bdevs_discovered": 1, 00:33:44.481 "num_base_bdevs_operational": 1, 00:33:44.481 "base_bdevs_list": [ 00:33:44.481 { 00:33:44.481 "name": null, 00:33:44.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.481 "is_configured": false, 00:33:44.481 "data_offset": 256, 00:33:44.481 "data_size": 7936 00:33:44.481 }, 00:33:44.481 { 00:33:44.481 "name": "BaseBdev2", 00:33:44.481 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:44.481 "is_configured": true, 00:33:44.481 "data_offset": 256, 00:33:44.481 "data_size": 7936 00:33:44.481 } 00:33:44.481 ] 00:33:44.481 }' 00:33:44.481 16:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:44.482 16:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:45.454 16:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:45.713 [2024-07-24 16:50:42.355689] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:45.713 [2024-07-24 16:50:42.355888] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:45.713 [2024-07-24 16:50:42.355912] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:45.713 [2024-07-24 16:50:42.355956] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:45.713 [2024-07-24 16:50:42.378794] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:33:45.713 [2024-07-24 16:50:42.381174] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:45.713 16:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # sleep 1 00:33:46.655 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:46.655 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:46.655 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:46.655 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:46.655 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:46.655 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:46.655 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:46.916 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:46.916 "name": "raid_bdev1", 00:33:46.916 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:46.916 "strip_size_kb": 0, 00:33:46.916 "state": "online", 00:33:46.916 "raid_level": "raid1", 00:33:46.916 "superblock": true, 00:33:46.916 "num_base_bdevs": 2, 00:33:46.916 "num_base_bdevs_discovered": 2, 00:33:46.916 "num_base_bdevs_operational": 2, 00:33:46.916 "process": { 00:33:46.916 
"type": "rebuild", 00:33:46.916 "target": "spare", 00:33:46.916 "progress": { 00:33:46.916 "blocks": 3072, 00:33:46.916 "percent": 38 00:33:46.916 } 00:33:46.916 }, 00:33:46.916 "base_bdevs_list": [ 00:33:46.916 { 00:33:46.916 "name": "spare", 00:33:46.916 "uuid": "2ef7fdbd-814f-5370-b9ea-04d35a109370", 00:33:46.916 "is_configured": true, 00:33:46.916 "data_offset": 256, 00:33:46.916 "data_size": 7936 00:33:46.916 }, 00:33:46.916 { 00:33:46.916 "name": "BaseBdev2", 00:33:46.916 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:46.916 "is_configured": true, 00:33:46.916 "data_offset": 256, 00:33:46.916 "data_size": 7936 00:33:46.916 } 00:33:46.916 ] 00:33:46.916 }' 00:33:46.916 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:46.916 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:46.916 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:46.916 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:46.916 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:47.175 [2024-07-24 16:50:43.878033] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:47.175 [2024-07-24 16:50:43.893400] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:47.175 [2024-07-24 16:50:43.893464] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:47.175 [2024-07-24 16:50:43.893485] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:47.175 [2024-07-24 16:50:43.893503] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed 
to remove target bdev: No such device 00:33:47.175 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:47.175 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:47.175 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:47.175 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:47.175 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:47.175 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:47.175 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:47.175 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:47.175 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:47.175 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:47.175 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:47.175 16:50:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:47.434 16:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:47.434 "name": "raid_bdev1", 00:33:47.434 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:47.434 "strip_size_kb": 0, 00:33:47.434 "state": "online", 00:33:47.434 "raid_level": "raid1", 00:33:47.434 "superblock": true, 
00:33:47.434 "num_base_bdevs": 2, 00:33:47.434 "num_base_bdevs_discovered": 1, 00:33:47.434 "num_base_bdevs_operational": 1, 00:33:47.434 "base_bdevs_list": [ 00:33:47.434 { 00:33:47.434 "name": null, 00:33:47.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.434 "is_configured": false, 00:33:47.434 "data_offset": 256, 00:33:47.434 "data_size": 7936 00:33:47.434 }, 00:33:47.434 { 00:33:47.434 "name": "BaseBdev2", 00:33:47.434 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:47.434 "is_configured": true, 00:33:47.434 "data_offset": 256, 00:33:47.434 "data_size": 7936 00:33:47.434 } 00:33:47.434 ] 00:33:47.434 }' 00:33:47.434 16:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:47.434 16:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.002 16:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:48.262 [2024-07-24 16:50:44.960946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:48.262 [2024-07-24 16:50:44.961010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:48.262 [2024-07-24 16:50:44.961038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:33:48.262 [2024-07-24 16:50:44.961056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:48.262 [2024-07-24 16:50:44.961328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:48.262 [2024-07-24 16:50:44.961351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:48.262 [2024-07-24 16:50:44.961423] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:48.262 [2024-07-24 16:50:44.961443] 
bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:48.262 [2024-07-24 16:50:44.961458] bdev_raid.c:3712:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:48.262 [2024-07-24 16:50:44.961490] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:48.262 [2024-07-24 16:50:44.986258] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:33:48.262 spare 00:33:48.262 [2024-07-24 16:50:44.988570] bdev_raid.c:2905:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:48.262 16:50:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # sleep 1 00:33:49.199 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:49.199 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:49.199 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:49.199 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:49.199 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:49.199 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:49.199 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:49.458 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:49.458 "name": "raid_bdev1", 00:33:49.458 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 
00:33:49.458 "strip_size_kb": 0, 00:33:49.458 "state": "online", 00:33:49.458 "raid_level": "raid1", 00:33:49.458 "superblock": true, 00:33:49.458 "num_base_bdevs": 2, 00:33:49.458 "num_base_bdevs_discovered": 2, 00:33:49.458 "num_base_bdevs_operational": 2, 00:33:49.458 "process": { 00:33:49.458 "type": "rebuild", 00:33:49.458 "target": "spare", 00:33:49.458 "progress": { 00:33:49.458 "blocks": 3072, 00:33:49.458 "percent": 38 00:33:49.458 } 00:33:49.458 }, 00:33:49.458 "base_bdevs_list": [ 00:33:49.458 { 00:33:49.458 "name": "spare", 00:33:49.458 "uuid": "2ef7fdbd-814f-5370-b9ea-04d35a109370", 00:33:49.458 "is_configured": true, 00:33:49.458 "data_offset": 256, 00:33:49.458 "data_size": 7936 00:33:49.458 }, 00:33:49.458 { 00:33:49.458 "name": "BaseBdev2", 00:33:49.458 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:49.458 "is_configured": true, 00:33:49.458 "data_offset": 256, 00:33:49.458 "data_size": 7936 00:33:49.458 } 00:33:49.458 ] 00:33:49.458 }' 00:33:49.458 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:49.458 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:49.458 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:49.717 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:49.717 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:49.717 [2024-07-24 16:50:46.550071] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:49.976 [2024-07-24 16:50:46.601634] bdev_raid.c:2541:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:49.976 [2024-07-24 
16:50:46.601695] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:49.976 [2024-07-24 16:50:46.601722] bdev_raid.c:2146:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:49.976 [2024-07-24 16:50:46.601734] bdev_raid.c:2479:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:49.976 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:49.976 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:49.976 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:49.976 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:49.976 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:49.976 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:49.976 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:49.976 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:49.976 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:49.976 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:49.976 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:49.976 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:50.235 16:50:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:50.235 "name": "raid_bdev1", 00:33:50.235 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:50.235 "strip_size_kb": 0, 00:33:50.235 "state": "online", 00:33:50.235 "raid_level": "raid1", 00:33:50.235 "superblock": true, 00:33:50.235 "num_base_bdevs": 2, 00:33:50.235 "num_base_bdevs_discovered": 1, 00:33:50.235 "num_base_bdevs_operational": 1, 00:33:50.235 "base_bdevs_list": [ 00:33:50.235 { 00:33:50.235 "name": null, 00:33:50.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.235 "is_configured": false, 00:33:50.235 "data_offset": 256, 00:33:50.235 "data_size": 7936 00:33:50.235 }, 00:33:50.235 { 00:33:50.235 "name": "BaseBdev2", 00:33:50.235 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:50.235 "is_configured": true, 00:33:50.235 "data_offset": 256, 00:33:50.235 "data_size": 7936 00:33:50.235 } 00:33:50.235 ] 00:33:50.235 }' 00:33:50.235 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:50.235 16:50:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:50.812 16:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:50.812 16:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:50.812 16:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:50.812 16:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:50.812 16:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:50.812 16:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.812 16:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:51.380 16:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:51.380 "name": "raid_bdev1", 00:33:51.380 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:51.380 "strip_size_kb": 0, 00:33:51.380 "state": "online", 00:33:51.380 "raid_level": "raid1", 00:33:51.380 "superblock": true, 00:33:51.380 "num_base_bdevs": 2, 00:33:51.380 "num_base_bdevs_discovered": 1, 00:33:51.380 "num_base_bdevs_operational": 1, 00:33:51.380 "base_bdevs_list": [ 00:33:51.380 { 00:33:51.380 "name": null, 00:33:51.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.380 "is_configured": false, 00:33:51.380 "data_offset": 256, 00:33:51.380 "data_size": 7936 00:33:51.380 }, 00:33:51.380 { 00:33:51.380 "name": "BaseBdev2", 00:33:51.380 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:51.380 "is_configured": true, 00:33:51.380 "data_offset": 256, 00:33:51.380 "data_size": 7936 00:33:51.380 } 00:33:51.380 ] 00:33:51.380 }' 00:33:51.380 16:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:51.380 16:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:51.380 16:50:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:51.380 16:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:51.380 16:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:33:51.640 16:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@788 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:51.640 [2024-07-24 16:50:48.396333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:51.640 [2024-07-24 16:50:48.396390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:51.640 [2024-07-24 16:50:48.396422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:33:51.640 [2024-07-24 16:50:48.396437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:51.640 [2024-07-24 16:50:48.396659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:51.640 [2024-07-24 16:50:48.396679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:51.640 [2024-07-24 16:50:48.396739] bdev_raid.c:3849:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:51.640 [2024-07-24 16:50:48.396757] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:51.640 [2024-07-24 16:50:48.396772] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:51.640 BaseBdev1 00:33:51.640 16:50:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@789 -- # sleep 1 00:33:52.577 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:52.577 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:52.577 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:52.577 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:33:52.577 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:52.577 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:52.577 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:52.577 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:52.577 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:52.577 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:52.577 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:52.577 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:52.836 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:52.836 "name": "raid_bdev1", 00:33:52.836 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:52.836 "strip_size_kb": 0, 00:33:52.836 "state": "online", 00:33:52.836 "raid_level": "raid1", 00:33:52.836 "superblock": true, 00:33:52.836 "num_base_bdevs": 2, 00:33:52.836 "num_base_bdevs_discovered": 1, 00:33:52.836 "num_base_bdevs_operational": 1, 00:33:52.836 "base_bdevs_list": [ 00:33:52.836 { 00:33:52.836 "name": null, 00:33:52.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:52.836 "is_configured": false, 00:33:52.836 "data_offset": 256, 00:33:52.836 "data_size": 7936 00:33:52.836 }, 00:33:52.836 { 00:33:52.836 "name": "BaseBdev2", 00:33:52.836 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:52.836 "is_configured": true, 00:33:52.836 "data_offset": 256, 00:33:52.836 
"data_size": 7936 00:33:52.836 } 00:33:52.836 ] 00:33:52.836 }' 00:33:52.836 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:52.836 16:50:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:53.773 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:53.773 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:53.773 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:53.773 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:53.773 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:53.773 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.773 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:54.033 "name": "raid_bdev1", 00:33:54.033 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:54.033 "strip_size_kb": 0, 00:33:54.033 "state": "online", 00:33:54.033 "raid_level": "raid1", 00:33:54.033 "superblock": true, 00:33:54.033 "num_base_bdevs": 2, 00:33:54.033 "num_base_bdevs_discovered": 1, 00:33:54.033 "num_base_bdevs_operational": 1, 00:33:54.033 "base_bdevs_list": [ 00:33:54.033 { 00:33:54.033 "name": null, 00:33:54.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:54.033 "is_configured": false, 00:33:54.033 "data_offset": 256, 00:33:54.033 "data_size": 7936 00:33:54.033 }, 
00:33:54.033 { 00:33:54.033 "name": "BaseBdev2", 00:33:54.033 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:54.033 "is_configured": true, 00:33:54.033 "data_offset": 256, 00:33:54.033 "data_size": 7936 00:33:54.033 } 00:33:54.033 ] 00:33:54.033 }' 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:33:54.033 16:50:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:54.303 [2024-07-24 16:50:51.047543] bdev_raid.c:3288:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:54.303 [2024-07-24 16:50:51.047709] bdev_raid.c:3654:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:54.303 [2024-07-24 16:50:51.047728] bdev_raid.c:3673:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:54.303 request: 00:33:54.303 { 00:33:54.303 "base_bdev": "BaseBdev1", 00:33:54.303 "raid_bdev": "raid_bdev1", 00:33:54.303 "method": "bdev_raid_add_base_bdev", 00:33:54.303 "req_id": 1 00:33:54.303 } 00:33:54.303 Got JSON-RPC error response 00:33:54.303 response: 00:33:54.303 { 00:33:54.303 "code": -22, 00:33:54.303 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:54.303 } 00:33:54.303 16:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:33:54.303 16:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:33:54.303 16:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:54.303 16:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:54.303 16:50:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@793 -- # sleep 1 00:33:55.239 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:55.239 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:55.239 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:55.239 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:55.239 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:55.239 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:55.239 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:55.239 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:55.239 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:55.239 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:55.239 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.239 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:55.497 16:50:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:55.497 "name": "raid_bdev1", 00:33:55.497 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:55.497 "strip_size_kb": 0, 00:33:55.497 "state": "online", 00:33:55.497 "raid_level": "raid1", 00:33:55.497 "superblock": true, 00:33:55.497 "num_base_bdevs": 2, 00:33:55.497 "num_base_bdevs_discovered": 1, 00:33:55.497 "num_base_bdevs_operational": 1, 00:33:55.497 "base_bdevs_list": [ 00:33:55.497 { 00:33:55.497 "name": null, 00:33:55.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:55.497 "is_configured": false, 00:33:55.497 "data_offset": 256, 00:33:55.497 "data_size": 7936 00:33:55.497 }, 00:33:55.497 { 00:33:55.497 "name": "BaseBdev2", 00:33:55.497 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:55.497 "is_configured": true, 00:33:55.497 "data_offset": 256, 00:33:55.497 "data_size": 7936 00:33:55.497 } 00:33:55.497 ] 00:33:55.497 }' 00:33:55.497 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:55.497 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:56.065 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:56.065 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:56.065 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:56.065 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:56.065 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:56.065 16:50:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:56.065 16:50:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.325 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:56.325 "name": "raid_bdev1", 00:33:56.325 "uuid": "3f81871d-d3ad-45b5-85b7-adddb681dad5", 00:33:56.325 "strip_size_kb": 0, 00:33:56.325 "state": "online", 00:33:56.325 "raid_level": "raid1", 00:33:56.325 "superblock": true, 00:33:56.325 "num_base_bdevs": 2, 00:33:56.325 "num_base_bdevs_discovered": 1, 00:33:56.325 "num_base_bdevs_operational": 1, 00:33:56.325 "base_bdevs_list": [ 00:33:56.325 { 00:33:56.325 "name": null, 00:33:56.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:56.325 "is_configured": false, 00:33:56.325 "data_offset": 256, 00:33:56.325 "data_size": 7936 00:33:56.325 }, 00:33:56.325 { 00:33:56.325 "name": "BaseBdev2", 00:33:56.325 "uuid": "9813f787-0fa9-5d16-a3bc-df409d5658a3", 00:33:56.325 "is_configured": true, 00:33:56.325 "data_offset": 256, 00:33:56.325 "data_size": 7936 00:33:56.325 } 00:33:56.325 ] 00:33:56.325 }' 00:33:56.325 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:56.325 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:56.325 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:56.325 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:56.325 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@798 -- # killprocess 1809716 00:33:56.325 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 1809716 ']' 00:33:56.325 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@954 -- # kill -0 1809716 00:33:56.325 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:33:56.326 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:56.326 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1809716 00:33:56.585 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:56.585 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:56.585 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1809716' 00:33:56.585 killing process with pid 1809716 00:33:56.585 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 1809716 00:33:56.585 Received shutdown signal, test time was about 60.000000 seconds 00:33:56.585 00:33:56.585 Latency(us) 00:33:56.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:56.585 =================================================================================================================== 00:33:56.585 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:56.585 [2024-07-24 16:50:53.233478] bdev_raid.c:1373:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:56.586 [2024-07-24 16:50:53.233617] bdev_raid.c: 486:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:56.586 16:50:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 1809716 00:33:56.586 [2024-07-24 16:50:53.233677] bdev_raid.c: 463:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:56.586 [2024-07-24 16:50:53.233693] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000042c80 name raid_bdev1, state offline 00:33:56.845 [2024-07-24 16:50:53.550520] bdev_raid.c:1399:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:58.779 16:50:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@800 -- # return 0 00:33:58.779 00:33:58.779 real 0m31.323s 00:33:58.779 user 0m48.365s 00:33:58.779 sys 0m3.948s 00:33:58.779 16:50:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:58.779 16:50:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:58.779 ************************************ 00:33:58.779 END TEST raid_rebuild_test_sb_md_interleaved 00:33:58.779 ************************************ 00:33:58.779 16:50:55 bdev_raid -- bdev/bdev_raid.sh@996 -- # trap - EXIT 00:33:58.779 16:50:55 bdev_raid -- bdev/bdev_raid.sh@997 -- # cleanup 00:33:58.779 16:50:55 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 1809716 ']' 00:33:58.779 16:50:55 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 1809716 00:33:58.779 16:50:55 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:33:58.779 00:33:58.779 real 20m29.112s 00:33:58.779 user 32m31.987s 00:33:58.779 sys 3m27.330s 00:33:58.779 16:50:55 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:58.779 16:50:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:58.779 ************************************ 00:33:58.779 END TEST bdev_raid 00:33:58.779 ************************************ 00:33:58.780 16:50:55 -- spdk/autotest.sh@195 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:33:58.780 16:50:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:58.780 16:50:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:58.780 16:50:55 -- common/autotest_common.sh@10 -- # set +x 00:33:58.780 ************************************ 00:33:58.780 START TEST bdevperf_config 
00:33:58.780 ************************************ 00:33:58.780 16:50:55 bdevperf_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:33:58.780 * Looking for test storage... 00:33:58.780 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:58.780 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:33:58.780 16:50:55 bdevperf_config 
-- bdevperf/common.sh@8 -- # local job_section=job0 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:58.780 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:58.780 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:58.780 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:33:58.780 
16:50:55 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:33:58.780 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:33:58.780 16:50:55 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:05.387 16:51:00 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-24 16:50:55.698442] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:34:05.387 [2024-07-24 16:50:55.698562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815450 ] 00:34:05.387 Using job config with 4 jobs 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:02.1 
cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:05.387 [2024-07-24 16:50:55.946601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.387 [2024-07-24 16:50:56.243954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.387 cpumask for '\''job0'\'' is too big 00:34:05.387 cpumask for '\''job1'\'' is too big 00:34:05.387 cpumask for '\''job2'\'' is too big 00:34:05.387 cpumask for '\''job3'\'' is too big 00:34:05.387 Running I/O for 2 seconds... 
00:34:05.387 00:34:05.387 Latency(us) 00:34:05.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:05.387 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:05.387 Malloc0 : 2.02 23414.60 22.87 0.00 0.00 10921.14 1979.19 17196.65 00:34:05.387 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:05.387 Malloc0 : 2.02 23393.48 22.85 0.00 0.00 10906.51 1939.87 15204.35 00:34:05.387 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:05.387 Malloc0 : 2.03 23372.60 22.82 0.00 0.00 10889.57 1952.97 13212.06 00:34:05.387 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:05.387 Malloc0 : 2.03 23351.73 22.80 0.00 0.00 10872.66 1952.97 11272.19 00:34:05.387 =================================================================================================================== 00:34:05.387 Total : 93532.41 91.34 0.00 0.00 10897.47 1939.87 17196.65' 00:34:05.387 16:51:00 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-24 16:50:55.698442] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:34:05.387 [2024-07-24 16:50:55.698562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815450 ] 00:34:05.387 Using job config with 4 jobs 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:05.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.387 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:02.1 
cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:05.388 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:05.388 [2024-07-24 16:50:55.946601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.388 [2024-07-24 16:50:56.243954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.388 cpumask for '\''job0'\'' is too big 00:34:05.388 cpumask for '\''job1'\'' is too big 00:34:05.388 cpumask for '\''job2'\'' is too big 00:34:05.388 cpumask for '\''job3'\'' is too big 00:34:05.388 Running I/O for 2 seconds... 
00:34:05.388 00:34:05.388 Latency(us) 00:34:05.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:05.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:05.388 Malloc0 : 2.02 23414.60 22.87 0.00 0.00 10921.14 1979.19 17196.65 00:34:05.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:05.388 Malloc0 : 2.02 23393.48 22.85 0.00 0.00 10906.51 1939.87 15204.35 00:34:05.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:05.388 Malloc0 : 2.03 23372.60 22.82 0.00 0.00 10889.57 1952.97 13212.06 00:34:05.388 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:05.388 Malloc0 : 2.03 23351.73 22.80 0.00 0.00 10872.66 1952.97 11272.19 00:34:05.388 =================================================================================================================== 00:34:05.388 Total : 93532.41 91.34 0.00 0.00 10897.47 1939.87 17196.65' 00:34:05.388 16:51:00 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-24 16:50:55.698442] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:34:05.388 [2024-07-24 16:50:55.698562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815450 ]
00:34:05.388 Using job config with 4 jobs
00:34:05.388 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:05.388 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:05.388 [2024-07-24 16:50:55.946601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:05.388 [2024-07-24 16:50:56.243954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:05.388 cpumask for '\''job0'\'' is too big
00:34:05.388 cpumask for '\''job1'\'' is too big
00:34:05.388 cpumask for '\''job2'\'' is too big
00:34:05.388 cpumask for '\''job3'\'' is too big
00:34:05.388 Running I/O for 2 seconds...
00:34:05.388
00:34:05.389 Total : 93532.41 91.34 0.00 0.00 10897.47 1939.87 17196.65'
00:34:05.389 16:51:00 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:34:05.389 16:51:00 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:34:05.389 16:51:01 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]]
00:34:05.389 16:51:01 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf
[2024-07-24 16:51:01.108110] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:34:05.389 [2024-07-24 16:51:01.108245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816315 ]
00:34:05.389 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:05.389 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:05.389 [2024-07-24 16:51:01.353617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:05.389 [2024-07-24 16:51:01.661028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:05.648 cpumask for 'job0' is too big
00:34:05.648 cpumask for 'job1' is too big
00:34:05.648 cpumask for 'job2' is too big
00:34:05.648 cpumask for 'job3' is too big
00:34:09.842 16:51:06 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs
00:34:09.842 Running I/O for 2 seconds...
00:34:09.842
00:34:09.842 Latency(us)
00:34:09.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:09.842 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:34:09.842 Malloc0 : 2.02 23365.82 22.82 0.00 0.00 10942.01 1992.29 17196.65
00:34:09.842 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:34:09.842 Malloc0 : 2.02 23344.65 22.80 0.00 0.00 10926.79 1939.87 15204.35
00:34:09.842 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:34:09.842 Malloc0 : 2.03 23386.42 22.84 0.00 0.00 10880.33 1952.97 13159.63
00:34:09.842 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:34:09.842 Malloc0 : 2.03 23365.48 22.82 0.00 0.00 10865.00 1952.97 11324.62
00:34:09.842 ===================================================================================================================
00:34:09.842 Total : 93462.37 91.27 0.00 0.00 10903.45 1939.87 17196.65'
00:34:09.842 16:51:06 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup
00:34:09.842 16:51:06 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf
00:34:09.842 16:51:06 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:34:09.843
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:34:09.843
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:34:09.843
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:34:09.843 16:51:06 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf
00:34:15.116 16:51:11 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-24 16:51:06.469147] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:34:15.116 [2024-07-24 16:51:06.469255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1817192 ]
00:34:15.116 Using job config with 3 jobs
00:34:15.116 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:15.116 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:15.117 [2024-07-24 16:51:06.682942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:15.117 [2024-07-24 16:51:06.990184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:15.117 cpumask for '\''job0'\'' is too big
00:34:15.117 cpumask for '\''job1'\'' is too big
00:34:15.117 cpumask for '\''job2'\'' is too big
00:34:15.117 Running I/O for 2 seconds...
00:34:15.117
00:34:15.117 Latency(us)
00:34:15.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:15.117 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:34:15.117 Malloc0 : 2.01 31663.54 30.92 0.00 0.00 8079.58 1900.54 12058.62
00:34:15.117 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:34:15.117 Malloc0 : 2.02 31634.72 30.89 0.00 0.00 8067.29 1887.44 10171.19
00:34:15.117 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:34:15.117 Malloc0 : 2.02 31606.19 30.87 0.00 0.00 8056.28 1887.44 8388.61
00:34:15.117 ===================================================================================================================
00:34:15.117 Total : 94904.46 92.68 0.00 0.00 8067.72 1887.44 12058.62'
00:34:15.117 16:51:11 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-24 16:51:06.469147] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:34:15.117
00:34:15.117 Total : 94904.46 92.68 0.00 0.00 8067.72 1887.44 12058.62'
00:34:15.117 16:51:11 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-24 16:51:06.469147] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:34:15.118 Total : 94904.46 92.68 0.00 0.00 8067.72 1887.44 12058.62'
00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:34:15.118 16:51:11 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]]
00:34:15.118 16:51:11 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup
00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf
00:34:15.118 16:51:11 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1
00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global
00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw
00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1
16:51:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:15.118 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:15.118 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:15.118 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@9 -- # local 
rw= 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:15.118 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:15.118 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:15.118 16:51:11 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:20.386 16:51:17 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-24 16:51:11.781016] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:34:20.386 [2024-07-24 16:51:11.781133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1818616 ] 00:34:20.386 Using job config with 4 jobs 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:02.1 
cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:20.386 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.386 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:20.386 [2024-07-24 16:51:12.019484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:20.386 [2024-07-24 16:51:12.323761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.386 cpumask for '\''job0'\'' is too big 00:34:20.386 cpumask for '\''job1'\'' is too big 00:34:20.386 cpumask for '\''job2'\'' is too big 00:34:20.386 cpumask for '\''job3'\'' is too big 00:34:20.386 Running I/O for 2 seconds... 
00:34:20.386
00:34:20.386 Latency(us)
00:34:20.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:20.386 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:20.386 Malloc0 : 2.04 11677.78 11.40 0.00 0.00 21900.60 4089.45 34603.01
00:34:20.386 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:20.387 Malloc1 : 2.04 11666.69 11.39 0.00 0.00 21902.39 4928.31 34603.01
00:34:20.387 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:20.387 Malloc0 : 2.04 11656.30 11.38 0.00 0.00 21838.07 4063.23 30408.70
00:34:20.387 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:20.387 Malloc1 : 2.04 11645.32 11.37 0.00 0.00 21836.64 4875.88 30408.70
00:34:20.387 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:20.387 Malloc0 : 2.05 11634.99 11.36 0.00 0.00 21776.88 4037.02 26214.40
00:34:20.387 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:20.387 Malloc1 : 2.05 11624.06 11.35 0.00 0.00 21774.12 4875.88 26214.40
00:34:20.387 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:20.387 Malloc0 : 2.05 11613.70 11.34 0.00 0.00 21710.82 4010.80 22334.67
00:34:20.387 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:34:20.387 Malloc1 : 2.05 11602.84 11.33 0.00 0.00 21708.94 4823.45 22334.67
00:34:20.387 ===================================================================================================================
00:34:20.387 Total : 93121.70 90.94 0.00 0.00 21806.06 4010.80 34603.01'
00:34:20.387 16:51:17 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-24 16:51:11.781016] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
'
00:34:20.387 16:51:17 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-24 16:51:11.781016] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...'
00:34:20.388 16:51:17 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:34:20.388 16:51:17 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:34:20.388 16:51:17 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]]
00:34:20.388 16:51:17 bdevperf_config -- bdevperf/test_config.sh@44 --
# cleanup 00:34:20.388 16:51:17 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:20.388 16:51:17 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:34:20.388 00:34:20.388 real 0m21.589s 00:34:20.388 user 0m19.670s 00:34:20.388 sys 0m1.726s 00:34:20.388 16:51:17 bdevperf_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:20.388 16:51:17 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:34:20.388 ************************************ 00:34:20.388 END TEST bdevperf_config 00:34:20.388 ************************************ 00:34:20.388 16:51:17 -- spdk/autotest.sh@196 -- # uname -s 00:34:20.388 16:51:17 -- spdk/autotest.sh@196 -- # [[ Linux == Linux ]] 00:34:20.388 16:51:17 -- spdk/autotest.sh@197 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:34:20.388 16:51:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:20.388 16:51:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:20.388 16:51:17 -- common/autotest_common.sh@10 -- # set +x 00:34:20.388 ************************************ 00:34:20.388 START TEST reactor_set_interrupt 00:34:20.388 ************************************ 00:34:20.388 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:34:20.388 * Looking for test storage... 
00:34:20.388 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:20.388 16:51:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:34:20.388 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:34:20.388 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:20.388 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:20.650 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:34:20.650 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:20.650 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:34:20.650 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:34:20.650 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:34:20.650 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:34:20.650 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:34:20.650 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:34:20.650 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:34:20.650 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:34:20.650 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:34:20.650 
16:51:17 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:34:20.650 16:51:17 reactor_set_interrupt -- 
common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 
00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 
00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:34:20.650 16:51:17 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:34:20.650 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:34:20.650 16:51:17 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:34:20.650 16:51:17 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:34:20.650 16:51:17 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:34:20.650 16:51:17 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:20.650 16:51:17 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:20.650 16:51:17 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:34:20.650 16:51:17 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:20.650 16:51:17 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:34:20.650 16:51:17 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:34:20.650 16:51:17 reactor_set_interrupt -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:34:20.650 16:51:17 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:34:20.650 16:51:17 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:34:20.650 16:51:17 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:34:20.650 16:51:17 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:34:20.650 16:51:17 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:34:20.650 #define SPDK_CONFIG_H 00:34:20.650 #define SPDK_CONFIG_APPS 1 00:34:20.650 #define SPDK_CONFIG_ARCH native 00:34:20.650 #define SPDK_CONFIG_ASAN 1 00:34:20.650 #undef SPDK_CONFIG_AVAHI 00:34:20.650 #undef SPDK_CONFIG_CET 00:34:20.650 #define SPDK_CONFIG_COVERAGE 1 00:34:20.651 #define SPDK_CONFIG_CROSS_PREFIX 00:34:20.651 #define SPDK_CONFIG_CRYPTO 1 00:34:20.651 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:34:20.651 #undef SPDK_CONFIG_CUSTOMOCF 00:34:20.651 #undef SPDK_CONFIG_DAOS 00:34:20.651 #define SPDK_CONFIG_DAOS_DIR 00:34:20.651 #define SPDK_CONFIG_DEBUG 1 00:34:20.651 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:34:20.651 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:34:20.651 #define SPDK_CONFIG_DPDK_INC_DIR 00:34:20.651 #define SPDK_CONFIG_DPDK_LIB_DIR 00:34:20.651 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:34:20.651 #undef SPDK_CONFIG_DPDK_UADK 00:34:20.651 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:34:20.651 #define SPDK_CONFIG_EXAMPLES 1 00:34:20.651 #undef SPDK_CONFIG_FC 00:34:20.651 #define SPDK_CONFIG_FC_PATH 00:34:20.651 #define SPDK_CONFIG_FIO_PLUGIN 1 00:34:20.651 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:34:20.651 #undef SPDK_CONFIG_FUSE 00:34:20.651 #undef SPDK_CONFIG_FUZZER 00:34:20.651 #define 
SPDK_CONFIG_FUZZER_LIB 00:34:20.651 #undef SPDK_CONFIG_GOLANG 00:34:20.651 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:34:20.651 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:34:20.651 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:34:20.651 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:34:20.651 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:34:20.651 #undef SPDK_CONFIG_HAVE_LIBBSD 00:34:20.651 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:34:20.651 #define SPDK_CONFIG_IDXD 1 00:34:20.651 #define SPDK_CONFIG_IDXD_KERNEL 1 00:34:20.651 #define SPDK_CONFIG_IPSEC_MB 1 00:34:20.651 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:34:20.651 #define SPDK_CONFIG_ISAL 1 00:34:20.651 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:34:20.651 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:34:20.651 #define SPDK_CONFIG_LIBDIR 00:34:20.651 #undef SPDK_CONFIG_LTO 00:34:20.651 #define SPDK_CONFIG_MAX_LCORES 128 00:34:20.651 #define SPDK_CONFIG_NVME_CUSE 1 00:34:20.651 #undef SPDK_CONFIG_OCF 00:34:20.651 #define SPDK_CONFIG_OCF_PATH 00:34:20.651 #define SPDK_CONFIG_OPENSSL_PATH 00:34:20.651 #undef SPDK_CONFIG_PGO_CAPTURE 00:34:20.651 #define SPDK_CONFIG_PGO_DIR 00:34:20.651 #undef SPDK_CONFIG_PGO_USE 00:34:20.651 #define SPDK_CONFIG_PREFIX /usr/local 00:34:20.651 #undef SPDK_CONFIG_RAID5F 00:34:20.651 #undef SPDK_CONFIG_RBD 00:34:20.651 #define SPDK_CONFIG_RDMA 1 00:34:20.651 #define SPDK_CONFIG_RDMA_PROV verbs 00:34:20.651 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:34:20.651 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:34:20.651 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:34:20.651 #define SPDK_CONFIG_SHARED 1 00:34:20.651 #undef SPDK_CONFIG_SMA 00:34:20.651 #define SPDK_CONFIG_TESTS 1 00:34:20.651 #undef SPDK_CONFIG_TSAN 00:34:20.651 #define SPDK_CONFIG_UBLK 1 00:34:20.651 #define SPDK_CONFIG_UBSAN 1 00:34:20.651 #undef SPDK_CONFIG_UNIT_TESTS 00:34:20.651 #undef SPDK_CONFIG_URING 00:34:20.651 #define SPDK_CONFIG_URING_PATH 00:34:20.651 #undef SPDK_CONFIG_URING_ZNS 
00:34:20.651 #undef SPDK_CONFIG_USDT 00:34:20.651 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:34:20.651 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:34:20.651 #undef SPDK_CONFIG_VFIO_USER 00:34:20.651 #define SPDK_CONFIG_VFIO_USER_DIR 00:34:20.651 #define SPDK_CONFIG_VHOST 1 00:34:20.651 #define SPDK_CONFIG_VIRTIO 1 00:34:20.651 #undef SPDK_CONFIG_VTUNE 00:34:20.651 #define SPDK_CONFIG_VTUNE_DIR 00:34:20.651 #define SPDK_CONFIG_WERROR 1 00:34:20.651 #define SPDK_CONFIG_WPDK_DIR 00:34:20.651 #undef SPDK_CONFIG_XNVME 00:34:20.651 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:34:20.651 16:51:17 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:34:20.651 16:51:17 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:20.651 16:51:17 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:20.651 16:51:17 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:20.651 16:51:17 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.651 16:51:17 reactor_set_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.651 16:51:17 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.651 16:51:17 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:34:20.651 16:51:17 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@6 -- # 
_pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@81 -- # [[ 
............................... != QEMU ]] 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:34:20.651 16:51:17 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 1 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export 
SPDK_TEST_ISAL 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:34:20.651 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:34:20.652 16:51:17 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:34:20.652 
16:51:17 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 1 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_OPAL 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:34:20.652 16:51:17 reactor_set_interrupt -- 
common/autotest_common.sh@158 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 1 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@173 -- # : 0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:34:20.652 
16:51:17 reactor_set_interrupt -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@183 
-- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:20.652 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@200 -- # 
asan_suppression_file=/var/tmp/asan_suppression_file 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@202 -- # cat 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@255 -- # 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@265 -- # export valgrind= 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@265 -- # valgrind= 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@271 -- # uname -s 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@278 -- # export HUGE_EVEN_ALLOC=yes 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@278 -- 
# HUGE_EVEN_ALLOC=yes 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@281 -- # MAKE=make 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@301 -- # TEST_MODE= 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@320 -- # [[ -z 1819498 ]] 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@320 -- # kill -0 1819498 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local mount target_dir 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.ng1ejP 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:34:20.653 
16:51:17 reactor_set_interrupt -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.ng1ejP/tests/interrupt /tmp/spdk.ng1ejP 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@329 -- # df -T 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=954302464 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4330127360 
00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=54804295680 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742305280 00:34:20.653 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=6938009600 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30866341888 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871150592 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4808704 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=12338618368 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348461056 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=9842688 00:34:20.654 16:51:17 
reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30870093824 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871154688 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=1060864 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=6174224384 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174228480 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:34:20.654 * Looking for test storage... 
00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@370 -- # local target_space new_size 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mount=/ 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@376 -- # target_space=54804295680 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@383 -- # new_size=9152602112 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 
00:34:20.654 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@391 -- # return 0 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:34:20.654 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:34:20.654 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:34:20.654 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:20.654 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:34:20.654 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:34:20.654 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:34:20.654 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:34:20.654 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:34:20.654 16:51:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:34:20.654 16:51:17 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:34:20.654 16:51:17 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:34:20.655 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.655 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:34:20.655 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1819539 00:34:20.655 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:20.655 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1819539 /var/tmp/spdk.sock 00:34:20.655 16:51:17 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:34:20.655 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1819539 ']' 00:34:20.655 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:20.655 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:20.655 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:20.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:20.655 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:20.655 16:51:17 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:20.655 [2024-07-24 16:51:17.476014] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:34:20.655 [2024-07-24 16:51:17.476136] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1819539 ] 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:02.3 cannot 
be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.914 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:20.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.915 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:20.915 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.915 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:20.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.915 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:20.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.915 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:20.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.915 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:20.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.915 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:20.915 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.915 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:20.915 [2024-07-24 16:51:17.702339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:21.173 [2024-07-24 16:51:17.964786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:21.173 [2024-07-24 16:51:17.964858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:21.173 [2024-07-24 16:51:17.964860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:21.740 [2024-07-24 16:51:18.403079] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:34:21.741 16:51:18 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:21.741 16:51:18 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:34:21.741 16:51:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:34:21.741 16:51:18 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:21.999 Malloc0 00:34:21.999 Malloc1 00:34:21.999 Malloc2 00:34:21.999 16:51:18 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:34:21.999 16:51:18 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:34:21.999 16:51:18 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:21.999 16:51:18 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:34:22.257 5000+0 records in 00:34:22.257 5000+0 records out 00:34:22.257 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0258794 s, 396 MB/s 00:34:22.257 16:51:18 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:34:22.257 AIO0 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 1819539 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 1819539 without_thd 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=1819539 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 
00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:34:22.516 16:51:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
echo '' 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:34:22.776 spdk_thread ids are 1 on reactor0. 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1819539 0 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1819539 0 idle 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1819539 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1819539 -w 256 00:34:22.776 16:51:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1819539 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:01.18 reactor_0' 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1819539 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:01.18 reactor_0 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:23.034 16:51:19 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # awk '{print $9}' 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1819539 1 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1819539 1 idle 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1819539 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1819539 -w 256 00:34:23.034 16:51:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1819542 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:00.00 reactor_1' 
00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1819542 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:00.00 reactor_1 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1819539 2 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1819539 2 idle 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1819539 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:23.292 16:51:19 reactor_set_interrupt -- 
interrupt/common.sh@24 -- # top -bHn 1 -p 1819539 -w 256 00:34:23.292 16:51:19 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:23.292 16:51:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1819543 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:00.00 reactor_2' 00:34:23.292 16:51:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1819543 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:00.00 reactor_2 00:34:23.292 16:51:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:23.292 16:51:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:23.292 16:51:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:23.292 16:51:20 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:23.292 16:51:20 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:23.292 16:51:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:23.292 16:51:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:23.292 16:51:20 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:23.292 16:51:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:34:23.293 16:51:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:34:23.293 16:51:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:34:23.550 [2024-07-24 16:51:20.354092] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:34:23.550 16:51:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:34:23.809 [2024-07-24 16:51:20.581763] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:34:23.809 [2024-07-24 16:51:20.582065] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:23.809 16:51:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:34:24.066 [2024-07-24 16:51:20.793677] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:34:24.066 [2024-07-24 16:51:20.793841] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:24.066 16:51:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:24.067 16:51:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1819539 0 00:34:24.067 16:51:20 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1819539 0 busy 00:34:24.067 16:51:20 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1819539 00:34:24.067 16:51:20 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:24.067 16:51:20 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:24.067 16:51:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:24.067 16:51:20 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:24.067 16:51:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:24.067 16:51:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:24.067 16:51:20 reactor_set_interrupt 
-- interrupt/common.sh@24 -- # top -bHn 1 -p 1819539 -w 256 00:34:24.067 16:51:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:24.324 16:51:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1819539 root 20 0 20.1t 206976 35840 R 99.9 0.3 0:01.58 reactor_0' 00:34:24.324 16:51:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1819539 root 20 0 20.1t 206976 35840 R 99.9 0.3 0:01.58 reactor_0 00:34:24.324 16:51:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:24.324 16:51:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:24.324 16:51:20 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1819539 2 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1819539 2 busy 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1819539 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:24.325 16:51:20 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1819539 -w 256 00:34:24.325 16:51:20 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:24.325 16:51:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1819543 root 20 0 20.1t 206976 35840 R 99.9 0.3 0:00.35 reactor_2' 00:34:24.325 16:51:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1819543 root 20 0 20.1t 206976 35840 R 99.9 0.3 0:00.35 reactor_2 00:34:24.325 16:51:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:24.325 16:51:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:24.325 16:51:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:34:24.325 16:51:21 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:34:24.325 16:51:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:24.325 16:51:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:34:24.325 16:51:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:24.325 16:51:21 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:24.325 16:51:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:34:24.582 [2024-07-24 16:51:21.381706] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:34:24.582 [2024-07-24 16:51:21.381831] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:24.582 16:51:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:34:24.582 16:51:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 1819539 2 00:34:24.582 16:51:21 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1819539 2 idle 00:34:24.582 16:51:21 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1819539 00:34:24.582 16:51:21 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:24.582 16:51:21 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:24.582 16:51:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:24.582 16:51:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:24.582 16:51:21 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:24.582 16:51:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:24.582 16:51:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:24.582 16:51:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1819539 -w 256 00:34:24.582 16:51:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:24.839 16:51:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1819543 root 20 0 20.1t 206976 35840 S 0.0 0.3 0:00.58 reactor_2' 00:34:24.839 16:51:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1819543 root 20 0 20.1t 206976 35840 S 0.0 0.3 0:00.58 reactor_2 00:34:24.839 16:51:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:24.839 16:51:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:24.839 16:51:21 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:24.839 16:51:21 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:24.839 16:51:21 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:24.839 16:51:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:24.839 16:51:21 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:24.839 16:51:21 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:24.839 16:51:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:34:25.097 [2024-07-24 16:51:21.789705] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:34:25.097 [2024-07-24 16:51:21.789849] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:25.097 16:51:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:34:25.097 16:51:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:34:25.097 16:51:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:34:25.355 [2024-07-24 16:51:21.962321] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:34:25.355 16:51:21 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 1819539 0 00:34:25.355 16:51:21 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1819539 0 idle 00:34:25.355 16:51:21 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1819539 00:34:25.355 16:51:21 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:25.355 16:51:21 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:25.355 16:51:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:25.355 16:51:21 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:25.355 16:51:21 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:25.355 16:51:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:25.355 16:51:21 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:25.355 16:51:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1819539 -w 256 00:34:25.355 16:51:21 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:25.355 16:51:22 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1819539 root 20 0 20.1t 206976 35840 S 0.0 0.3 0:02.39 reactor_0' 00:34:25.355 16:51:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1819539 root 20 0 20.1t 206976 35840 S 0.0 0.3 0:02.39 reactor_0 00:34:25.355 16:51:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:25.355 16:51:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:25.355 16:51:22 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:25.355 16:51:22 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:25.355 16:51:22 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:25.355 16:51:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = 
\i\d\l\e ]] 00:34:25.355 16:51:22 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:25.355 16:51:22 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:25.355 16:51:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:34:25.355 16:51:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:34:25.355 16:51:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:34:25.355 16:51:22 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 1819539 00:34:25.355 16:51:22 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1819539 ']' 00:34:25.355 16:51:22 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 1819539 00:34:25.355 16:51:22 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:34:25.355 16:51:22 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:25.355 16:51:22 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1819539 00:34:25.613 16:51:22 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:25.613 16:51:22 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:25.613 16:51:22 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1819539' 00:34:25.613 killing process with pid 1819539 00:34:25.613 16:51:22 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 1819539 00:34:25.613 16:51:22 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 1819539 00:34:27.543 16:51:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:34:27.543 16:51:24 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:34:27.543 16:51:24 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:34:27.543 16:51:24 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:27.543 16:51:24 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:34:27.543 16:51:24 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:34:27.543 16:51:24 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1820678 00:34:27.543 16:51:24 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:27.543 16:51:24 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1820678 /var/tmp/spdk.sock 00:34:27.543 16:51:24 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 1820678 ']' 00:34:27.543 16:51:24 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:27.543 16:51:24 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:27.543 16:51:24 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:27.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:27.544 16:51:24 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:27.544 16:51:24 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:27.544 [2024-07-24 16:51:24.331688] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:34:27.544 [2024-07-24 16:51:24.331807] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1820678 ] 00:34:27.801 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.801 EAL: Requested device 0000:3d:01.0 cannot be used [same qat_pci_device_allocate()/EAL message pair repeated for devices 0000:3d:01.1 through 0000:3f:02.7] [2024-07-24 16:51:24.557485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:28.059 [2024-07-24 16:51:24.850427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:28.059 [2024-07-24 16:51:24.850502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.059 [2024-07-24 16:51:24.850504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:28.623 [2024-07-24 16:51:25.321253] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:28.623 16:51:25 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:28.623 16:51:25 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:34:28.623 16:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:34:28.623 16:51:25 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:28.882 Malloc0 00:34:28.882 Malloc1 00:34:28.882 Malloc2 00:34:28.882 16:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:34:28.882 16:51:25 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:34:28.882 16:51:25 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:28.882 16:51:25 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:34:28.882 5000+0 records in 00:34:28.882 5000+0 records out 00:34:28.882 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0267 s, 384 MB/s 00:34:28.882 16:51:25 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:34:29.139 AIO0 00:34:29.139 16:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 1820678 00:34:29.139 16:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 1820678 00:34:29.139 16:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=1820678 00:34:29.139 16:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:34:29.139 16:51:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:34:29.139 16:51:25 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:34:29.139 16:51:25 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:34:29.139 16:51:25 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:29.139 16:51:25 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:34:29.139 16:51:25 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:29.139 16:51:25 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:34:29.139 16:51:25 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:29.396 16:51:26 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:34:29.397 16:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:34:29.397 16:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:34:29.397 16:51:26 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:34:29.397 16:51:26 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:29.397 16:51:26 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:34:29.397 16:51:26 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:29.397 16:51:26 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:34:29.397 16:51:26 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:34:29.654 
16:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:34:29.654 spdk_thread ids are 1 on reactor0. 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1820678 0 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1820678 0 idle 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1820678 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1820678 -w 256 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1820678 root 20 0 20.1t 202496 34048 S 0.0 0.3 0:01.23 reactor_0' 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1820678 root 20 0 20.1t 202496 34048 S 0.0 0.3 0:01.23 reactor_0 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # 
awk '{print $9}' 00:34:29.654 16:51:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1820678 1 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1820678 1 idle 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1820678 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1820678 -w 256 00:34:29.655 16:51:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1820743 root 20 0 20.1t 202496 34048 S 0.0 0.3 0:00.00 reactor_1' 00:34:29.913 16:51:26 
reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1820743 root 20 0 20.1t 202496 34048 S 0.0 0.3 0:00.00 reactor_1 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1820678 2 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1820678 2 idle 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1820678 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 
1 -p 1820678 -w 256 00:34:29.913 16:51:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:30.170 16:51:26 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1820744 root 20 0 20.1t 202496 34048 S 0.0 0.3 0:00.00 reactor_2' 00:34:30.170 16:51:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1820744 root 20 0 20.1t 202496 34048 S 0.0 0.3 0:00.00 reactor_2 00:34:30.170 16:51:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:30.170 16:51:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:30.170 16:51:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:30.170 16:51:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:30.170 16:51:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:30.170 16:51:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:30.170 16:51:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:30.170 16:51:26 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:30.170 16:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:34:30.170 16:51:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:34:30.427 [2024-07-24 16:51:27.063442] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:34:30.427 [2024-07-24 16:51:27.063688] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
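The per-reactor CPU sampling repeated throughout this trace reduces to one pipeline: take the matching `top -bHn 1` thread line, strip leading whitespace, pick column 9 (%CPU in the wide batch layout), and drop the decimal. A minimal standalone version of that step — the helper name is ours, and the exact decimal-stripping expansion is inferred from the `cpu_rate=99.9` → `cpu_rate=99` transitions in the trace, not read from `interrupt/common.sh`:

```shell
#!/usr/bin/env bash
# Extract the integer %CPU from a single `top -bHn 1 -w 256` thread line.
# Column 9 is %CPU in the batch output captured in the trace above.
parse_cpu_rate() {
    local top_line=$1
    local cpu_rate
    # Same pipeline as the trace: strip leading whitespace, take field 9.
    cpu_rate=$(sed -e 's/^\s*//g' <<<"$top_line" | awk '{print $9}')
    echo "${cpu_rate%.*}"   # "99.9" -> "99", "0.0" -> "0"
}

# Sample line copied verbatim from the trace above.
parse_cpu_rate ' 1820678 root 20 0 20.1t 205184 34048 R 99.9 0.3 0:01.64 reactor_0'   # -> 99
```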
00:34:30.427 [2024-07-24 16:51:27.063811] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:30.427 16:51:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:34:30.684 [2024-07-24 16:51:27.291946] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:34:30.684 [2024-07-24 16:51:27.292122] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1820678 0 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1820678 0 busy 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1820678 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1820678 -w 256 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1820678 root 20 0 20.1t 205184 34048 R 99.9 0.3 0:01.64 reactor_0' 00:34:30.684 16:51:27 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # echo 1820678 root 20 0 20.1t 205184 34048 R 99.9 0.3 0:01.64 reactor_0 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1820678 2 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1820678 2 busy 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1820678 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1820678 -w 256 00:34:30.684 16:51:27 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:30.941 
16:51:27 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1820744 root 20 0 20.1t 205184 34048 R 99.9 0.3 0:00.35 reactor_2' 00:34:30.941 16:51:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1820744 root 20 0 20.1t 205184 34048 R 99.9 0.3 0:00.35 reactor_2 00:34:30.941 16:51:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:30.941 16:51:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:30.941 16:51:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:34:30.941 16:51:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:34:30.941 16:51:27 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:30.941 16:51:27 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:34:30.941 16:51:27 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:30.941 16:51:27 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:30.941 16:51:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:34:31.219 [2024-07-24 16:51:27.881645] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
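The branch structure traced at `common.sh@28`–`@33` comes down to two thresholds: a reactor claimed busy must report at least 70% CPU, and one claimed idle at most 30%. A sketch of that decision alone — the thresholds are read off the `[[ 99 -lt 70 ]]` and `[[ 0 -gt 30 ]]` tests in the trace, while the function name and shape are an assumption, not the real helper:

```shell
#!/usr/bin/env bash
# Decide whether an integer %CPU sample is consistent with the claimed state.
# Returns 0 (success) when the sample matches the expectation, 1 otherwise.
cpu_rate_matches_state() {
    local state=$1 cpu_rate=$2
    if [[ $state == busy ]]; then
        # Mirrors [[ 99 -lt 70 ]] in the trace: below 70% is not busy enough.
        [[ $cpu_rate -lt 70 ]] && return 1
    elif [[ $state == idle ]]; then
        # Mirrors [[ 0 -gt 30 ]] in the trace: above 30% is not idle.
        [[ $cpu_rate -gt 30 ]] && return 1
    fi
    return 0
}
```

In the trace the callers retry this check up to ten times (`(( j = 10 ))` ... `(( j != 0 ))`) with a fresh `top` sample each pass, so a reactor only has to satisfy the threshold once within the window.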
00:34:31.219 [2024-07-24 16:51:27.881820] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:31.219 16:51:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:34:31.219 16:51:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 1820678 2 00:34:31.219 16:51:27 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1820678 2 idle 00:34:31.219 16:51:27 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1820678 00:34:31.219 16:51:27 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:31.219 16:51:27 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:31.219 16:51:27 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:31.219 16:51:27 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:31.219 16:51:27 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:31.219 16:51:27 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:31.219 16:51:27 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:31.219 16:51:27 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1820678 -w 256 00:34:31.219 16:51:27 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:31.219 16:51:28 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1820744 root 20 0 20.1t 205184 34048 S 0.0 0.3 0:00.58 reactor_2' 00:34:31.219 16:51:28 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1820744 root 20 0 20.1t 205184 34048 S 0.0 0.3 0:00.58 reactor_2 00:34:31.219 16:51:28 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:31.219 16:51:28 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:31.219 16:51:28 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:31.219 16:51:28 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:31.219 16:51:28 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:34:31.477 [2024-07-24 16:51:28.290739] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:34:31.477 [2024-07-24 16:51:28.290935] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:34:31.477 [2024-07-24 16:51:28.290970] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 1820678 0 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1820678 0 idle 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=1820678 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:31.477 16:51:28 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1820678 -w 256 00:34:31.477 16:51:28 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:31.734 16:51:28 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='1820678 root 20 0 20.1t 205184 34048 S 6.7 0.3 0:02.47 reactor_0' 00:34:31.734 16:51:28 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 1820678 root 20 0 20.1t 205184 34048 S 6.7 0.3 0:02.47 reactor_0 00:34:31.734 16:51:28 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:31.734 16:51:28 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:31.734 16:51:28 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:34:31.734 16:51:28 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:34:31.734 16:51:28 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:31.734 16:51:28 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:31.734 16:51:28 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:34:31.734 16:51:28 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:31.734 16:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:34:31.734 16:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:34:31.734 16:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:34:31.734 16:51:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 1820678 00:34:31.734 16:51:28 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 1820678 ']' 00:34:31.734 16:51:28 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 
1820678 00:34:31.734 16:51:28 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:34:31.734 16:51:28 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:31.734 16:51:28 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1820678 00:34:31.734 16:51:28 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:31.734 16:51:28 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:31.735 16:51:28 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1820678' 00:34:31.735 killing process with pid 1820678 00:34:31.735 16:51:28 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 1820678 00:34:31.735 16:51:28 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 1820678 00:34:34.260 16:51:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:34:34.260 16:51:30 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:34:34.260 00:34:34.260 real 0m13.450s 00:34:34.260 user 0m13.579s 00:34:34.260 sys 0m2.329s 00:34:34.260 16:51:30 reactor_set_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:34.260 16:51:30 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:34.260 ************************************ 00:34:34.260 END TEST reactor_set_interrupt 00:34:34.260 ************************************ 00:34:34.260 16:51:30 -- spdk/autotest.sh@198 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:34:34.260 16:51:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:34.260 16:51:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:34.260 16:51:30 -- common/autotest_common.sh@10 -- # set +x 00:34:34.260 
************************************ 00:34:34.260 START TEST reap_unregistered_poller 00:34:34.260 ************************************ 00:34:34.260 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:34:34.260 * Looking for test storage... 00:34:34.260 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:34.260 16:51:30 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:34:34.260 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:34:34.260 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:34.260 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:34.260 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
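The teardown traced at `autotest_common.sh@950`–`@974` in the previous test (argument guard, `kill -0` liveness probe, `uname` gate, `ps --no-headers -o comm=` name lookup, then kill and wait) can be sketched as one helper. This is a reconstruction from the xtrace, not the real `autotest_common.sh`; in particular the sudo special case is simplified to a bail-out:

```shell
#!/usr/bin/env bash
# Hedged reconstruction of the killprocess flow seen in the trace above.
killprocess() {
    local pid=$1
    [[ -z $pid ]] && return 1                 # '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 0    # kill -0: still alive?
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # The real helper treats sudo-wrapped processes specially; this
        # sketch simply refuses to kill them.
        [[ $process_name == sudo ]] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap; ignore the SIGTERM status
}
```

Note the trace kills the target by PID (`kill 1820678`) and then `wait`s on it so the `real/user/sys` summary at the end of the test reflects the fully reaped process.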
00:34:34.260 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:34.260 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:34:34.260 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:34:34.260 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:34:34.260 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:34:34.260 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:34:34.260 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:34:34.260 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:34:34.260 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:34:34.260 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:34:34.260 16:51:30 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:34:34.260 16:51:30 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:34:34.260 16:51:30 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:34:34.260 16:51:30 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:34:34.260 16:51:30 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:34:34.260 16:51:30 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:34:34.260 16:51:30 reap_unregistered_poller -- common/build_config.sh@7 -- # 
CONFIG_PREFIX=/usr/local 00:34:34.260 16:51:30 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:34:34.260 16:51:30 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:34:34.260 16:51:30 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:34:34.261 
16:51:30 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:34:34.261 16:51:30 reap_unregistered_poller -- 
common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@65 
-- # CONFIG_APPS=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:34:34.261 16:51:30 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:34:34.261 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:34:34.261 16:51:30 
reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:34:34.261 16:51:30 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:34:34.261 16:51:30 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:34:34.261 16:51:30 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:34.261 16:51:30 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:34.261 16:51:30 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:34:34.261 16:51:30 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:34.261 16:51:30 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:34:34.261 16:51:30 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:34:34.261 16:51:30 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:34:34.261 16:51:30 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:34:34.261 16:51:30 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:34:34.261 16:51:30 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:34:34.261 16:51:30 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:34:34.261 16:51:30 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef 
SPDK_CONFIG_H 00:34:34.261 #define SPDK_CONFIG_H 00:34:34.261 #define SPDK_CONFIG_APPS 1 00:34:34.261 #define SPDK_CONFIG_ARCH native 00:34:34.261 #define SPDK_CONFIG_ASAN 1 00:34:34.261 #undef SPDK_CONFIG_AVAHI 00:34:34.261 #undef SPDK_CONFIG_CET 00:34:34.261 #define SPDK_CONFIG_COVERAGE 1 00:34:34.261 #define SPDK_CONFIG_CROSS_PREFIX 00:34:34.261 #define SPDK_CONFIG_CRYPTO 1 00:34:34.261 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:34:34.261 #undef SPDK_CONFIG_CUSTOMOCF 00:34:34.261 #undef SPDK_CONFIG_DAOS 00:34:34.261 #define SPDK_CONFIG_DAOS_DIR 00:34:34.261 #define SPDK_CONFIG_DEBUG 1 00:34:34.261 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:34:34.261 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:34:34.261 #define SPDK_CONFIG_DPDK_INC_DIR 00:34:34.261 #define SPDK_CONFIG_DPDK_LIB_DIR 00:34:34.261 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:34:34.261 #undef SPDK_CONFIG_DPDK_UADK 00:34:34.261 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:34:34.261 #define SPDK_CONFIG_EXAMPLES 1 00:34:34.261 #undef SPDK_CONFIG_FC 00:34:34.261 #define SPDK_CONFIG_FC_PATH 00:34:34.261 #define SPDK_CONFIG_FIO_PLUGIN 1 00:34:34.261 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:34:34.261 #undef SPDK_CONFIG_FUSE 00:34:34.261 #undef SPDK_CONFIG_FUZZER 00:34:34.261 #define SPDK_CONFIG_FUZZER_LIB 00:34:34.261 #undef SPDK_CONFIG_GOLANG 00:34:34.261 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:34:34.261 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:34:34.261 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:34:34.261 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:34:34.261 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:34:34.261 #undef SPDK_CONFIG_HAVE_LIBBSD 00:34:34.261 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:34:34.261 #define SPDK_CONFIG_IDXD 1 00:34:34.261 #define SPDK_CONFIG_IDXD_KERNEL 1 00:34:34.261 #define SPDK_CONFIG_IPSEC_MB 1 00:34:34.261 #define SPDK_CONFIG_IPSEC_MB_DIR 
/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:34:34.261 #define SPDK_CONFIG_ISAL 1 00:34:34.261 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:34:34.261 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:34:34.261 #define SPDK_CONFIG_LIBDIR 00:34:34.261 #undef SPDK_CONFIG_LTO 00:34:34.261 #define SPDK_CONFIG_MAX_LCORES 128 00:34:34.261 #define SPDK_CONFIG_NVME_CUSE 1 00:34:34.262 #undef SPDK_CONFIG_OCF 00:34:34.262 #define SPDK_CONFIG_OCF_PATH 00:34:34.262 #define SPDK_CONFIG_OPENSSL_PATH 00:34:34.262 #undef SPDK_CONFIG_PGO_CAPTURE 00:34:34.262 #define SPDK_CONFIG_PGO_DIR 00:34:34.262 #undef SPDK_CONFIG_PGO_USE 00:34:34.262 #define SPDK_CONFIG_PREFIX /usr/local 00:34:34.262 #undef SPDK_CONFIG_RAID5F 00:34:34.262 #undef SPDK_CONFIG_RBD 00:34:34.262 #define SPDK_CONFIG_RDMA 1 00:34:34.262 #define SPDK_CONFIG_RDMA_PROV verbs 00:34:34.262 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:34:34.262 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:34:34.262 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:34:34.262 #define SPDK_CONFIG_SHARED 1 00:34:34.262 #undef SPDK_CONFIG_SMA 00:34:34.262 #define SPDK_CONFIG_TESTS 1 00:34:34.262 #undef SPDK_CONFIG_TSAN 00:34:34.262 #define SPDK_CONFIG_UBLK 1 00:34:34.262 #define SPDK_CONFIG_UBSAN 1 00:34:34.262 #undef SPDK_CONFIG_UNIT_TESTS 00:34:34.262 #undef SPDK_CONFIG_URING 00:34:34.262 #define SPDK_CONFIG_URING_PATH 00:34:34.262 #undef SPDK_CONFIG_URING_ZNS 00:34:34.262 #undef SPDK_CONFIG_USDT 00:34:34.262 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:34:34.262 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:34:34.262 #undef SPDK_CONFIG_VFIO_USER 00:34:34.262 #define SPDK_CONFIG_VFIO_USER_DIR 00:34:34.262 #define SPDK_CONFIG_VHOST 1 00:34:34.262 #define SPDK_CONFIG_VIRTIO 1 00:34:34.262 #undef SPDK_CONFIG_VTUNE 00:34:34.262 #define SPDK_CONFIG_VTUNE_DIR 00:34:34.262 #define SPDK_CONFIG_WERROR 1 00:34:34.262 #define SPDK_CONFIG_WPDK_DIR 00:34:34.262 #undef SPDK_CONFIG_XNVME 00:34:34.262 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:34:34.262 16:51:30 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:34:34.262 16:51:30 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.262 16:51:30 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.262 16:51:30 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.262 16:51:30 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.262 16:51:30 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.262 16:51:30 reap_unregistered_poller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.262 16:51:30 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:34:34.262 16:51:30 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:34:34.262 16:51:30 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 1 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- 
common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:34:34.262 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:34:34.263 16:51:30 reap_unregistered_poller -- 
common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 1 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:34:34.263 16:51:30 reap_unregistered_poller -- 
common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- 
common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 1 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@173 -- # : 0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:34:34.263 16:51:30 reap_unregistered_poller -- 
common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@183 -- # export 
PCI_BLOCK_SYNC_ON_RESET=yes 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@200 -- # 
asan_suppression_file=/var/tmp/asan_suppression_file 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@202 -- # cat 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:34.263 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:34.263 16:51:30 reap_unregistered_poller -- 
common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@265 -- # export valgrind= 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@265 -- # valgrind= 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@271 -- # uname -s 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@278 -- # export 
HUGE_EVEN_ALLOC=yes 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@278 -- # HUGE_EVEN_ALLOC=yes 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@281 -- # MAKE=make 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@301 -- # TEST_MODE= 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@320 -- # [[ -z 1821828 ]] 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@320 -- # kill -0 1821828 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local mount target_dir 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.0GmmxS 00:34:34.264 16:51:30 
reap_unregistered_poller -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.0GmmxS/tests/interrupt /tmp/spdk.0GmmxS 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@329 -- # df -T 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=954302464 
00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4330127360 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=54804107264 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742305280 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=6938198016 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30866341888 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871150592 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4808704 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@364 
-- # avails["$mount"]=12338622464 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348461056 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=9838592 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30870093824 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871154688 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=1060864 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=6174224384 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174228480 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:34:34.264 * Looking for test storage... 
00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@370 -- # local target_space new_size 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mount=/ 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@376 -- # target_space=54804107264 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@383 -- # new_size=9152790528 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:34.264 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@391 -- # return 0 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:34:34.264 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:34:34.265 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:34:34.265 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:34:34.265 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:34:34.265 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:34:34.265 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:34:34.265 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:34:34.265 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # 
PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=1821954 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:34:34.265 16:51:30 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 1821954 /var/tmp/spdk.sock 00:34:34.265 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@831 -- # '[' -z 1821954 ']' 00:34:34.265 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.265 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:34.265 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
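The storage probe traced above (`read -r source fs size use avail _ mount` over `df` output, then `target_space`/`requested_size` comparison) follows a common pattern. A minimal standalone sketch of it, assuming GNU `df` and bash 4+; the array and variable names mirror the trace, while `requested_size` and `target_dir` are illustrative stand-ins for the harness's real values:

```shell
#!/usr/bin/env bash
# Sketch of the test-storage probe: read `df` output into associative
# arrays keyed by mount point, resolve a candidate directory to its
# mount, and accept it if enough space is available.
declare -A mounts fss avails sizes uses

while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    uses["$mount"]=$use
    avails["$mount"]=$avail
done < <(df -T -B1 | tail -n +2)     # -T adds the fs-type column the loop expects

requested_size=$((1024 * 1024))      # 1 MiB, illustrative only
target_dir=${TMPDIR:-/tmp}
mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
target_space=${avails[$mount]}

if (( target_space >= requested_size )); then
    printf '* Found test storage at %s\n' "$target_dir"
fi
```

The trace's extra branches (`[[ overlay == tmpfs ]]`, the `new_size * 100 / sizes[/] > 95` check) add overlay/tmpfs special cases on top of this same array lookup.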
00:34:34.265 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:34.265 16:51:30 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:34:34.265 [2024-07-24 16:51:31.039101] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:34:34.265 [2024-07-24 16:51:31.039229] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1821954 ] 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:02.3 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:01.7 
cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:34.523 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.523 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:34.523 [2024-07-24 16:51:31.262075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:34.781 [2024-07-24 16:51:31.561671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.781 [2024-07-24 16:51:31.565167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:34.781 [2024-07-24 16:51:31.565168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:35.345 [2024-07-24 16:51:32.048387] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
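The checks that follow in this trace pull poller names out of the JSON that `rpc.py thread_get_pollers` returns. A standalone sketch of that extraction, using a here-doc copy of the `app_thread` object from the log in place of a live RPC call (`jq` is assumed to be installed):

```shell
#!/usr/bin/env bash
# Extract active and timed poller names from a thread_get_pollers-style
# JSON object, mirroring the native_pollers accumulation in the trace.
app_thread=$(cat <<'EOF'
{
  "name": "app_thread",
  "id": 1,
  "active_pollers": [],
  "timed_pollers": [
    {
      "name": "rpc_subsystem_poll_servers",
      "id": 1,
      "state": "waiting",
      "run_count": 0,
      "busy_count": 0,
      "period_ticks": 10000000
    }
  ],
  "paused_pollers": []
}
EOF
)

native_pollers=$(jq -r '.active_pollers[].name' <<<"$app_thread")   # empty here
native_pollers+=' '
native_pollers+=$(jq -r '.timed_pollers[].name' <<<"$app_thread")

echo "$native_pollers"
```

With no active pollers registered, the accumulated string is just a leading space plus `rpc_subsystem_poll_servers`, which is exactly the comparison the test's `[[ … == \ \r\p\c… ]]` check performs.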
00:34:35.345 16:51:32 reap_unregistered_poller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:35.345 16:51:32 reap_unregistered_poller -- common/autotest_common.sh@864 -- # return 0 00:34:35.345 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:34:35.345 16:51:32 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:35.345 16:51:32 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:34:35.345 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:34:35.345 16:51:32 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:35.345 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:34:35.345 "name": "app_thread", 00:34:35.345 "id": 1, 00:34:35.345 "active_pollers": [], 00:34:35.345 "timed_pollers": [ 00:34:35.345 { 00:34:35.345 "name": "rpc_subsystem_poll_servers", 00:34:35.345 "id": 1, 00:34:35.345 "state": "waiting", 00:34:35.345 "run_count": 0, 00:34:35.345 "busy_count": 0, 00:34:35.345 "period_ticks": 10000000 00:34:35.345 } 00:34:35.345 ], 00:34:35.345 "paused_pollers": [] 00:34:35.345 }' 00:34:35.345 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:34:35.345 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:34:35.345 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:34:35.345 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:34:35.601 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:34:35.601 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:34:35.601 
16:51:32 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:34:35.601 16:51:32 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:35.601 16:51:32 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:34:35.601 5000+0 records in 00:34:35.601 5000+0 records out 00:34:35.601 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0252105 s, 406 MB/s 00:34:35.601 16:51:32 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:34:35.859 AIO0 00:34:35.859 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:36.117 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:34:36.117 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:34:36.117 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:34:36.118 16:51:32 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:36.118 16:51:32 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:34:36.118 16:51:32 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:36.118 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:34:36.118 "name": "app_thread", 00:34:36.118 "id": 1, 00:34:36.118 "active_pollers": [], 00:34:36.118 "timed_pollers": [ 00:34:36.118 { 00:34:36.118 "name": "rpc_subsystem_poll_servers", 00:34:36.118 "id": 1, 00:34:36.118 "state": "waiting", 00:34:36.118 "run_count": 0, 00:34:36.118 "busy_count": 0, 
00:34:36.118 "period_ticks": 10000000 00:34:36.118 } 00:34:36.118 ], 00:34:36.118 "paused_pollers": [] 00:34:36.118 }' 00:34:36.118 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:34:36.118 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:34:36.118 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:34:36.118 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:34:36.118 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:34:36.118 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:34:36.118 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:34:36.118 16:51:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 1821954 00:34:36.118 16:51:32 reap_unregistered_poller -- common/autotest_common.sh@950 -- # '[' -z 1821954 ']' 00:34:36.376 16:51:32 reap_unregistered_poller -- common/autotest_common.sh@954 -- # kill -0 1821954 00:34:36.376 16:51:32 reap_unregistered_poller -- common/autotest_common.sh@955 -- # uname 00:34:36.376 16:51:32 reap_unregistered_poller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:36.376 16:51:32 reap_unregistered_poller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1821954 00:34:36.376 16:51:33 reap_unregistered_poller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:36.376 16:51:33 reap_unregistered_poller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:36.376 16:51:33 reap_unregistered_poller -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 1821954' 00:34:36.376 killing process with pid 1821954 00:34:36.376 16:51:33 reap_unregistered_poller -- common/autotest_common.sh@969 -- # kill 1821954 00:34:36.376 16:51:33 reap_unregistered_poller -- common/autotest_common.sh@974 -- # wait 1821954 00:34:38.275 16:51:34 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:34:38.275 16:51:34 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:34:38.275 00:34:38.275 real 0m4.049s 00:34:38.275 user 0m3.590s 00:34:38.275 sys 0m0.780s 00:34:38.275 16:51:34 reap_unregistered_poller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:38.275 16:51:34 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:34:38.275 ************************************ 00:34:38.275 END TEST reap_unregistered_poller 00:34:38.275 ************************************ 00:34:38.275 16:51:34 -- spdk/autotest.sh@202 -- # uname -s 00:34:38.275 16:51:34 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:34:38.275 16:51:34 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:34:38.275 16:51:34 -- spdk/autotest.sh@209 -- # [[ 1 -eq 0 ]] 00:34:38.275 16:51:34 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 -- spdk/autotest.sh@264 -- # timing_exit lib 00:34:38.275 16:51:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:38.275 16:51:34 -- common/autotest_common.sh@10 -- # set +x 00:34:38.275 16:51:34 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 
-- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 -- spdk/autotest.sh@351 -- # '[' 1 -eq 1 ']' 00:34:38.275 16:51:34 -- spdk/autotest.sh@352 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:34:38.275 16:51:34 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:38.275 16:51:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:38.275 16:51:34 -- common/autotest_common.sh@10 -- # set +x 00:34:38.275 ************************************ 00:34:38.275 START TEST compress_compdev 00:34:38.275 ************************************ 00:34:38.275 16:51:34 compress_compdev -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:34:38.275 * Looking for test storage... 
00:34:38.275 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:34:38.275 16:51:34 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:34:38.275 16:51:34 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:38.275 16:51:34 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:38.275 16:51:34 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:38.275 16:51:34 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.275 16:51:34 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.275 16:51:34 compress_compdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.275 16:51:34 compress_compdev -- 
paths/export.sh@5 -- # export PATH 00:34:38.275 16:51:34 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:38.275 16:51:34 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:38.275 16:51:34 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:38.275 16:51:34 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:34:38.275 16:51:34 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:34:38.275 16:51:34 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:34:38.275 16:51:34 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:34:38.275 16:51:34 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1822746 00:34:38.276 16:51:34 compress_compdev -- 
compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:38.276 16:51:34 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1822746 00:34:38.276 16:51:34 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 1822746 ']' 00:34:38.276 16:51:34 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:34:38.276 16:51:34 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.276 16:51:34 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:38.276 16:51:34 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:38.276 16:51:34 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:38.276 16:51:34 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:34:38.276 [2024-07-24 16:51:35.097419] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
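Both `waitforlisten` calls in this trace follow the same pattern: after launching a target, spin until its RPC UNIX socket appears, bailing out if the process dies first. A simplified sketch under stated assumptions; the names and `max_retries=100` mirror the trace, while the poll interval and the plain `-S` test are guesses at the real helper's behavior:

```shell
#!/usr/bin/env bash
# Poll for a target's RPC UNIX socket, giving up if the process exits
# or after max_retries polls.
waitforlisten_sketch() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited early
        [ -S "$rpc_addr" ] && return 0           # socket is listening
        sleep 0.1
    done
    return 1                                     # gave up waiting
}
```

The early `kill -0` check matters: without it, a crashed target would make the caller spin for the full retry budget before failing.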
00:34:38.276 [2024-07-24 16:51:35.097547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1822746 ] 00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:38.534 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:38.534 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:38.534 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:38.534 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:38.534 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:38.534 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:38.534 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:38.534 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:38.534 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:38.534 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:38.534 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:38.534 EAL: Requested device 0000:3d:02.3 cannot be used 
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3d:02.4 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3d:02.5 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3d:02.6 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3d:02.7 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:01.0 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:01.1 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:01.2 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:01.3 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:01.4 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:01.5 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:01.6 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:01.7 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:02.0 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:02.1 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:02.2 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:02.3 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:02.4 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:02.5 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:02.6 cannot be used
00:34:38.534 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:38.534 EAL: Requested device 0000:3f:02.7 cannot be used
00:34:38.534 [2024-07-24 16:51:35.311703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:34:38.792 [2024-07-24 16:51:35.594536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:34:38.792 [2024-07-24 16:51:35.594539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:34:40.169 [2024-07-24 16:51:36.987682] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
00:34:41.103 16:51:37 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:34:41.103 16:51:37 compress_compdev -- common/autotest_common.sh@864 -- # return 0
00:34:41.103 16:51:37 compress_compdev -- compress/compress.sh@74 -- # create_vols
00:34:41.103 16:51:37 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh
00:34:41.103 16:51:37 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:34:44.438 [2024-07-24 16:51:40.777480] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat
00:34:44.438 16:51:40 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1
00:34:44.438 16:51:40 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1
00:34:44.438 16:51:40 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:34:44.438 16:51:40 compress_compdev -- common/autotest_common.sh@901 -- # local i
00:34:44.438 16:51:40 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:34:44.438 16:51:40 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:34:44.438 16:51:40 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:34:44.438 16:51:41 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000
00:34:44.438 [
00:34:44.438 {
00:34:44.438 "name": "Nvme0n1",
00:34:44.438 "aliases": [
00:34:44.438 "57be06e2-2dce-4f03-b7c6-93e3487a835f"
00:34:44.438 ],
00:34:44.438 "product_name": "NVMe disk",
00:34:44.438 "block_size": 512,
00:34:44.438 "num_blocks": 3907029168,
00:34:44.438 "uuid": "57be06e2-2dce-4f03-b7c6-93e3487a835f",
00:34:44.438 "assigned_rate_limits": {
00:34:44.438 "rw_ios_per_sec": 0,
00:34:44.438 "rw_mbytes_per_sec": 0,
00:34:44.438 "r_mbytes_per_sec": 0,
00:34:44.438 "w_mbytes_per_sec": 0
00:34:44.438 },
00:34:44.438 "claimed": false,
00:34:44.438 "zoned": false,
00:34:44.438 "supported_io_types": {
00:34:44.438 "read": true,
00:34:44.438 "write": true,
00:34:44.438 "unmap": true,
00:34:44.438 "flush": true,
00:34:44.438 "reset": true,
00:34:44.438 "nvme_admin": true,
00:34:44.438 "nvme_io": true,
00:34:44.438 "nvme_io_md": false,
00:34:44.438 "write_zeroes": true,
00:34:44.438 "zcopy": false,
00:34:44.438 "get_zone_info": false,
00:34:44.438 "zone_management": false,
00:34:44.438 "zone_append": false,
00:34:44.438 "compare": false,
00:34:44.438 "compare_and_write": false,
00:34:44.438 "abort": true,
00:34:44.438 "seek_hole": false,
00:34:44.438 "seek_data": false,
00:34:44.438 "copy": false,
00:34:44.438 "nvme_iov_md": false
00:34:44.438 },
00:34:44.438 "driver_specific": {
00:34:44.438 "nvme": [
00:34:44.438 {
00:34:44.438 "pci_address": "0000:d8:00.0",
00:34:44.438 "trid": {
00:34:44.438 "trtype": "PCIe",
00:34:44.438 "traddr": "0000:d8:00.0"
00:34:44.438 },
00:34:44.438 "ctrlr_data": {
00:34:44.438 "cntlid": 0,
00:34:44.438 "vendor_id": "0x8086",
00:34:44.438 "model_number": "INTEL SSDPE2KX020T8",
00:34:44.438 "serial_number": "BTLJ125505KA2P0BGN",
00:34:44.438 "firmware_revision": "VDV10170",
00:34:44.438 "oacs": {
00:34:44.438 "security": 0,
00:34:44.438 "format": 1,
00:34:44.438 "firmware": 1,
00:34:44.438 "ns_manage": 1
00:34:44.438 },
00:34:44.438 "multi_ctrlr": false,
00:34:44.438 "ana_reporting": false
00:34:44.438 },
00:34:44.438 "vs": {
00:34:44.438 "nvme_version": "1.2"
00:34:44.438 },
00:34:44.438 "ns_data": {
00:34:44.438 "id": 1,
00:34:44.438 "can_share": false
00:34:44.438 }
00:34:44.438 }
00:34:44.438 ],
00:34:44.438 "mp_policy": "active_passive"
00:34:44.438 }
00:34:44.438 }
00:34:44.438 ]
00:34:44.438 16:51:41 compress_compdev -- common/autotest_common.sh@907 -- # return 0
00:34:44.438 16:51:41 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
00:34:44.696 [2024-07-24 16:51:41.502078] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat
00:34:45.631 d8957c9b-42cc-405e-95ef-1334d5aeb901
00:34:45.890 16:51:42 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100
00:34:45.890 22b993d4-7175-4c5c-adab-c2c5f77a4d9a
00:34:45.890 16:51:42 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0
00:34:45.890 16:51:42 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0
00:34:45.890 16:51:42 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:34:45.890 16:51:42 compress_compdev -- common/autotest_common.sh@901 -- # local i
00:34:45.890 16:51:42 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:34:45.890 16:51:42 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:34:45.890 16:51:42 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:34:46.148 16:51:42 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000
00:34:46.406 [
00:34:46.406 {
00:34:46.406 "name": "22b993d4-7175-4c5c-adab-c2c5f77a4d9a",
00:34:46.406 "aliases": [
00:34:46.406 "lvs0/lv0"
00:34:46.406 ],
00:34:46.406 "product_name": "Logical Volume",
00:34:46.406 "block_size": 512,
00:34:46.406 "num_blocks": 204800,
00:34:46.406 "uuid": "22b993d4-7175-4c5c-adab-c2c5f77a4d9a",
00:34:46.406 "assigned_rate_limits": {
00:34:46.406 "rw_ios_per_sec": 0,
00:34:46.406 "rw_mbytes_per_sec": 0,
00:34:46.406 "r_mbytes_per_sec": 0,
00:34:46.406 "w_mbytes_per_sec": 0
00:34:46.406 },
00:34:46.406 "claimed": false,
00:34:46.406 "zoned": false,
00:34:46.406 "supported_io_types": {
00:34:46.406 "read": true,
00:34:46.406 "write": true,
00:34:46.406 "unmap": true,
00:34:46.406 "flush": false,
00:34:46.407 "reset": true,
00:34:46.407 "nvme_admin": false,
00:34:46.407 "nvme_io": false,
00:34:46.407 "nvme_io_md": false,
00:34:46.407 "write_zeroes": true,
00:34:46.407 "zcopy": false,
00:34:46.407 "get_zone_info": false,
00:34:46.407 "zone_management": false,
00:34:46.407 "zone_append": false,
00:34:46.407 "compare": false,
00:34:46.407 "compare_and_write": false,
00:34:46.407 "abort": false,
00:34:46.407 "seek_hole": true,
00:34:46.407 "seek_data": true,
00:34:46.407 "copy": false,
00:34:46.407 "nvme_iov_md": false
00:34:46.407 },
00:34:46.407 "driver_specific": {
00:34:46.407 "lvol": {
00:34:46.407 "lvol_store_uuid": "d8957c9b-42cc-405e-95ef-1334d5aeb901",
00:34:46.407 "base_bdev": "Nvme0n1",
00:34:46.407 "thin_provision": true,
00:34:46.407 "num_allocated_clusters": 0,
00:34:46.407 "snapshot": false,
00:34:46.407 "clone": false,
00:34:46.407 "esnap_clone": false
00:34:46.407 }
00:34:46.407 }
00:34:46.407 }
00:34:46.407 ]
00:34:46.407 16:51:43 compress_compdev -- common/autotest_common.sh@907 -- # return 0
00:34:46.407 16:51:43 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']'
00:34:46.407 16:51:43 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem
00:34:46.673 [2024-07-24 16:51:43.395172] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0
00:34:46.673 COMP_lvs0/lv0
00:34:46.673 16:51:43 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0
00:34:46.673 16:51:43 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0
00:34:46.673 16:51:43 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:34:46.673 16:51:43 compress_compdev -- common/autotest_common.sh@901 -- # local i
00:34:46.673 16:51:43 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:34:46.673 16:51:43 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:34:46.673 16:51:43 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:34:46.933 16:51:43 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000
00:34:47.191 [
00:34:47.191 {
00:34:47.191 "name": "COMP_lvs0/lv0",
00:34:47.191 "aliases": [
00:34:47.191 "de76c3e9-69c3-54e3-8b32-452247aa7cde"
00:34:47.191 ],
00:34:47.191 "product_name": "compress",
00:34:47.191 "block_size": 512,
00:34:47.191 "num_blocks": 200704,
00:34:47.191 "uuid": "de76c3e9-69c3-54e3-8b32-452247aa7cde",
00:34:47.191 "assigned_rate_limits": {
00:34:47.191 "rw_ios_per_sec": 0,
00:34:47.191 "rw_mbytes_per_sec": 0,
00:34:47.191 "r_mbytes_per_sec": 0,
00:34:47.191 "w_mbytes_per_sec": 0
00:34:47.191 },
00:34:47.191 "claimed": false,
00:34:47.192 "zoned": false,
00:34:47.192 "supported_io_types": {
00:34:47.192 "read": true,
00:34:47.192 "write": true,
00:34:47.192 "unmap": false,
00:34:47.192 "flush": false,
00:34:47.192 "reset": false,
00:34:47.192 "nvme_admin": false,
00:34:47.192 "nvme_io": false,
00:34:47.192 "nvme_io_md": false,
00:34:47.192 "write_zeroes": true,
00:34:47.192 "zcopy": false,
00:34:47.192 "get_zone_info": false,
00:34:47.192 "zone_management": false,
00:34:47.192 "zone_append": false,
00:34:47.192 "compare": false,
00:34:47.192 "compare_and_write": false,
00:34:47.192 "abort": false,
00:34:47.192 "seek_hole": false,
00:34:47.192 "seek_data": false,
00:34:47.192 "copy": false,
00:34:47.192 "nvme_iov_md": false
00:34:47.192 },
00:34:47.192 "driver_specific": {
00:34:47.192 "compress": {
00:34:47.192 "name": "COMP_lvs0/lv0",
00:34:47.192 "base_bdev_name": "22b993d4-7175-4c5c-adab-c2c5f77a4d9a",
00:34:47.192 "pm_path": "/tmp/pmem/9c33b7ce-1e56-475b-942b-22cc75b54837"
00:34:47.192 }
00:34:47.192 }
00:34:47.192 }
00:34:47.192 ]
00:34:47.192 16:51:43 compress_compdev -- common/autotest_common.sh@907 -- # return 0
00:34:47.192 16:51:43 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:34:47.192 [2024-07-24 16:51:43.976871] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat
00:34:47.192 [2024-07-24 16:51:43.980192] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d5a0 PMD being used: compress_qat
00:34:47.192 Running I/O for 3 seconds...
00:34:50.474
00:34:50.474 Latency(us)
00:34:50.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:50.474 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:34:50.474 Verification LBA range: start 0x0 length 0x3100
00:34:50.474 COMP_lvs0/lv0 : 3.01 3866.13 15.10 0.00 0.00 8221.72 136.81 13841.20
00:34:50.474 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:34:50.474 Verification LBA range: start 0x3100 length 0x3100
00:34:50.474 COMP_lvs0/lv0 : 3.01 3998.68 15.62 0.00 0.00 7957.16 126.16 14260.63
00:34:50.474 ===================================================================================================================
00:34:50.474 Total : 7864.82 30.72 0.00 0.00 8087.16 126.16 14260.63
00:34:50.474 0
00:34:50.474 16:51:47 compress_compdev -- compress/compress.sh@76 -- # destroy_vols
00:34:50.474 16:51:47 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:34:50.474 16:51:47 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:34:50.742 16:51:47 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:34:50.742 16:51:47 compress_compdev -- compress/compress.sh@78 -- # killprocess 1822746
00:34:50.742 16:51:47 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 1822746 ']'
00:34:50.742 16:51:47 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 1822746
00:34:50.742 16:51:47 compress_compdev -- common/autotest_common.sh@955 -- # uname
00:34:50.742 16:51:47 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:34:50.742 16:51:47 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1822746
00:34:50.742 16:51:47 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:34:50.742 16:51:47 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:34:50.742 16:51:47 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1822746'
killing process with pid 1822746
00:34:50.742 16:51:47 compress_compdev -- common/autotest_common.sh@969 -- # kill 1822746
Received shutdown signal, test time was about 3.000000 seconds
00:34:50.742
00:34:50.742 Latency(us)
00:34:50.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:50.742 ===================================================================================================================
00:34:50.742 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:50.742 16:51:47 compress_compdev -- common/autotest_common.sh@974 -- # wait 1822746
00:34:54.927 16:51:50 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512
00:34:54.927 16:51:50 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]]
00:34:54.927 16:51:50 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1825385
00:34:54.927 16:51:50 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
00:34:54.927 16:51:50 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json
00:34:54.927 16:51:50 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1825385
00:34:54.927 16:51:50 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 1825385 ']'
00:34:54.927 16:51:50 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:54.927 16:51:50 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100
00:34:54.927 16:51:50 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:54.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:54.927 16:51:50 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable
00:34:54.927 16:51:50 compress_compdev -- common/autotest_common.sh@10 -- # set +x
00:34:54.927 [2024-07-24 16:51:51.069707] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:34:54.927 [2024-07-24 16:51:51.069838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825385 ]
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:01.1 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:01.2 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:01.3 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:01.4 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:01.5 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:01.6 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:01.7 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:02.0 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:02.1 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:02.2 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:02.3 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:02.4 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:02.5 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:02.6 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3d:02.7 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:01.0 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:01.1 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:01.2 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:01.3 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:01.4 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:01.5 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:01.6 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:01.7 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:02.0 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:02.1 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:02.2 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:02.3 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:02.4 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:02.5 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:02.6 cannot be used
00:34:54.927 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:54.927 EAL: Requested device 0000:3f:02.7 cannot be used
00:34:54.927 [2024-07-24 16:51:51.285166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:34:54.927 [2024-07-24 16:51:51.572184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:34:54.927 [2024-07-24 16:51:51.572189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:34:56.301 [2024-07-24 16:51:52.985051] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
00:34:56.874 16:51:53 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:34:56.874 16:51:53 compress_compdev -- common/autotest_common.sh@864 -- # return 0
00:34:56.874 16:51:53 compress_compdev -- compress/compress.sh@74 -- # create_vols 512
00:34:56.874 16:51:53 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh
00:34:56.874 16:51:53 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:35:00.159 [2024-07-24 16:51:56.775581] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat
00:35:00.159 16:51:56 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1
00:35:00.159 16:51:56 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1
00:35:00.159 16:51:56 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:35:00.159 16:51:56 compress_compdev -- common/autotest_common.sh@901 -- # local i
00:35:00.159 16:51:56 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:35:00.159 16:51:56 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:35:00.159 16:51:56 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:35:00.418 16:51:57 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000
00:35:00.418 [
00:35:00.418 {
00:35:00.418 "name": "Nvme0n1",
00:35:00.418 "aliases": [
00:35:00.418 "c650bd7c-3a63-4156-942a-0552c07700cf"
00:35:00.418 ],
00:35:00.418 "product_name": "NVMe disk",
00:35:00.418 "block_size": 512,
00:35:00.418 "num_blocks": 3907029168,
00:35:00.418 "uuid": "c650bd7c-3a63-4156-942a-0552c07700cf",
00:35:00.418 "assigned_rate_limits": {
00:35:00.418 "rw_ios_per_sec": 0,
00:35:00.418 "rw_mbytes_per_sec": 0,
00:35:00.418 "r_mbytes_per_sec": 0,
00:35:00.418 "w_mbytes_per_sec": 0
00:35:00.418 },
00:35:00.418 "claimed": false,
00:35:00.418 "zoned": false,
00:35:00.418 "supported_io_types": {
00:35:00.418 "read": true,
00:35:00.418 "write": true,
00:35:00.418 "unmap": true,
00:35:00.418 "flush": true,
00:35:00.418 "reset": true,
00:35:00.418 "nvme_admin": true,
00:35:00.418 "nvme_io": true,
00:35:00.418 "nvme_io_md": false,
00:35:00.418 "write_zeroes": true,
00:35:00.418 "zcopy": false,
00:35:00.418 "get_zone_info": false,
00:35:00.418 "zone_management": false,
00:35:00.418 "zone_append": false,
00:35:00.418 "compare": false,
00:35:00.418 "compare_and_write": false,
00:35:00.418 "abort": true,
00:35:00.418 "seek_hole": false,
00:35:00.418 "seek_data": false,
00:35:00.418 "copy": false,
00:35:00.418 "nvme_iov_md": false
00:35:00.418 },
00:35:00.418 "driver_specific": {
00:35:00.418 "nvme": [
00:35:00.418 {
00:35:00.418 "pci_address": "0000:d8:00.0",
00:35:00.418 "trid": {
00:35:00.418 "trtype": "PCIe",
00:35:00.419 "traddr": "0000:d8:00.0"
00:35:00.419 },
00:35:00.419 "ctrlr_data": {
00:35:00.419 "cntlid": 0,
00:35:00.419 "vendor_id": "0x8086",
00:35:00.419 "model_number": "INTEL SSDPE2KX020T8",
00:35:00.419 "serial_number": "BTLJ125505KA2P0BGN",
00:35:00.419 "firmware_revision": "VDV10170",
00:35:00.419 "oacs": {
00:35:00.419 "security": 0,
00:35:00.419 "format": 1,
00:35:00.419 "firmware": 1,
00:35:00.419 "ns_manage": 1
00:35:00.419 },
00:35:00.419 "multi_ctrlr": false,
00:35:00.419 "ana_reporting": false
00:35:00.419 },
00:35:00.419 "vs": {
00:35:00.419 "nvme_version": "1.2"
00:35:00.419 },
00:35:00.419 "ns_data": {
00:35:00.419 "id": 1,
00:35:00.419 "can_share": false
00:35:00.419 }
00:35:00.419 }
00:35:00.419 ],
00:35:00.419 "mp_policy": "active_passive"
00:35:00.419 }
00:35:00.419 }
00:35:00.419 ]
00:35:00.677 16:51:57 compress_compdev -- common/autotest_common.sh@907 -- # return 0
00:35:00.677 16:51:57 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0
00:35:00.677 [2024-07-24 16:51:57.498826] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat
00:35:02.053 20de2ac0-124e-4dbd-9bcd-0c6e6d45a22b
00:35:02.053 16:51:58 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100
00:35:02.053 eb886d96-e41a-4af7-971e-6491826a6db0
00:35:02.053 16:51:58 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0
00:35:02.053 16:51:58 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0
00:35:02.053 16:51:58 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:35:02.053 16:51:58 compress_compdev -- common/autotest_common.sh@901 -- # local i
00:35:02.053 16:51:58 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:35:02.053 16:51:58 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:35:02.053 16:51:58 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:35:02.312 16:51:59 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000
00:35:02.571 [
00:35:02.571 {
00:35:02.571 "name": "eb886d96-e41a-4af7-971e-6491826a6db0",
00:35:02.571 "aliases": [
00:35:02.571 "lvs0/lv0"
00:35:02.571 ],
00:35:02.571 "product_name": "Logical Volume",
00:35:02.571 "block_size": 512,
00:35:02.571 "num_blocks": 204800,
00:35:02.571 "uuid": "eb886d96-e41a-4af7-971e-6491826a6db0",
00:35:02.571 "assigned_rate_limits": {
00:35:02.571 "rw_ios_per_sec": 0,
00:35:02.571 "rw_mbytes_per_sec": 0,
00:35:02.571 "r_mbytes_per_sec": 0,
00:35:02.571 "w_mbytes_per_sec": 0
00:35:02.571 },
00:35:02.571 "claimed": false,
00:35:02.571 "zoned": false,
00:35:02.571 "supported_io_types": {
00:35:02.571 "read": true,
00:35:02.571 "write": true,
00:35:02.571 "unmap": true,
00:35:02.571 "flush": false,
00:35:02.571 "reset": true,
00:35:02.571 "nvme_admin": false,
00:35:02.571 "nvme_io": false,
00:35:02.571 "nvme_io_md": false,
00:35:02.571 "write_zeroes": true,
00:35:02.571 "zcopy": false,
00:35:02.571 "get_zone_info": false,
00:35:02.571 "zone_management": false,
00:35:02.571 "zone_append": false,
00:35:02.571 "compare": false,
00:35:02.571 "compare_and_write": false,
00:35:02.571 "abort": false,
00:35:02.571 "seek_hole": true,
00:35:02.571 "seek_data": true,
00:35:02.571 "copy": false,
00:35:02.571 "nvme_iov_md": false
00:35:02.571 },
00:35:02.571 "driver_specific": {
00:35:02.571 "lvol": {
00:35:02.571 "lvol_store_uuid": "20de2ac0-124e-4dbd-9bcd-0c6e6d45a22b",
00:35:02.571 "base_bdev": "Nvme0n1",
00:35:02.571 "thin_provision": true,
00:35:02.571 "num_allocated_clusters": 0,
00:35:02.571 "snapshot": false,
00:35:02.571 "clone": false,
00:35:02.571 "esnap_clone": false
00:35:02.571 }
00:35:02.571 }
00:35:02.571 }
00:35:02.571 ]
00:35:02.571 16:51:59 compress_compdev -- common/autotest_common.sh@907 -- # return 0
00:35:02.571 16:51:59 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']'
00:35:02.571 16:51:59 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512
00:35:02.829 [2024-07-24 16:51:59.490101] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0
00:35:02.829 COMP_lvs0/lv0
00:35:02.829 16:51:59 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0
00:35:02.829 16:51:59 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0
00:35:02.829 16:51:59 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:35:02.829 16:51:59 compress_compdev -- common/autotest_common.sh@901 -- # local i
00:35:02.829 16:51:59 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:35:02.829 16:51:59 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:35:02.829 16:51:59 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:35:03.088 16:51:59 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000
00:35:03.088 [
00:35:03.088 {
00:35:03.088 "name": "COMP_lvs0/lv0",
00:35:03.088 "aliases": [
00:35:03.088 "43431c45-be89-5628-b13b-7285a0a740bc"
00:35:03.088 ],
00:35:03.088 "product_name": "compress",
00:35:03.088 "block_size": 512,
00:35:03.088 "num_blocks": 200704,
00:35:03.088 "uuid": "43431c45-be89-5628-b13b-7285a0a740bc",
00:35:03.088 "assigned_rate_limits": {
00:35:03.088 "rw_ios_per_sec": 0,
00:35:03.088 "rw_mbytes_per_sec": 0,
00:35:03.088 "r_mbytes_per_sec": 0,
00:35:03.088 "w_mbytes_per_sec": 0
00:35:03.088 },
00:35:03.088 "claimed": false,
00:35:03.088 "zoned": false,
00:35:03.088 "supported_io_types": {
00:35:03.088 "read": true,
00:35:03.088 "write": true,
00:35:03.088 "unmap": false,
00:35:03.088 "flush": false,
00:35:03.088 "reset": false,
00:35:03.088 "nvme_admin": false,
00:35:03.088 "nvme_io": false,
00:35:03.088 "nvme_io_md": false,
00:35:03.088 "write_zeroes": true,
00:35:03.088 "zcopy": false,
00:35:03.088 "get_zone_info": false,
00:35:03.088 "zone_management": false,
00:35:03.088 "zone_append": false,
00:35:03.088 "compare": false,
00:35:03.088 "compare_and_write": false,
00:35:03.088 "abort": false,
00:35:03.088 "seek_hole": false,
00:35:03.088 "seek_data": false,
00:35:03.088 "copy": false,
00:35:03.088 "nvme_iov_md": false
00:35:03.088 },
00:35:03.088 "driver_specific": {
00:35:03.088 "compress": {
00:35:03.088 "name": "COMP_lvs0/lv0",
00:35:03.088 "base_bdev_name": "eb886d96-e41a-4af7-971e-6491826a6db0",
00:35:03.088 "pm_path": "/tmp/pmem/32ff2590-8e1d-492c-83a0-ce29401c9df6"
00:35:03.088 }
00:35:03.088 }
00:35:03.088 }
00:35:03.088 ]
00:35:03.347 16:51:59 compress_compdev -- common/autotest_common.sh@907 -- # return 0
00:35:03.347 16:51:59 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:35:03.347 [2024-07-24 16:52:00.071109] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat
00:35:03.347 [2024-07-24 16:52:00.074268] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d4c0 PMD being used: compress_qat
00:35:03.347 Running I/O for 3 seconds...
00:35:06.741
00:35:06.741 Latency(us)
00:35:06.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:06.741 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:35:06.741 Verification LBA range: start 0x0 length 0x3100
00:35:06.741 COMP_lvs0/lv0 : 3.01 3852.46 15.05 0.00 0.00 8255.62 134.35 13526.63
00:35:06.741 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:35:06.741 Verification LBA range: start 0x3100 length 0x3100
00:35:06.741 COMP_lvs0/lv0 : 3.01 4005.75 15.65 0.00 0.00 7949.41 125.34 13159.63
00:35:06.741 ===================================================================================================================
00:35:06.741 Total : 7858.21 30.70 0.00 0.00 8099.53 125.34 13526.63
00:35:06.741 0
00:35:06.741 16:52:03 compress_compdev -- compress/compress.sh@76 -- # destroy_vols
00:35:06.741 16:52:03 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:35:06.999 16:52:03 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:35:06.999 16:52:03 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:35:06.999 16:52:03 compress_compdev -- compress/compress.sh@78 -- # killprocess 1825385
00:35:06.999 16:52:03 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 1825385 ']'
00:35:06.999 16:52:03 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 1825385
00:35:06.999 16:52:03 compress_compdev -- common/autotest_common.sh@955 -- # uname
00:35:06.999 16:52:03 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:35:06.999 16:52:03 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1825385
00:35:06.999 16:52:03 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:35:06.999 16:52:03 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:35:06.999 16:52:03 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1825385'
killing process with pid 1825385
00:35:06.999 16:52:03 compress_compdev -- common/autotest_common.sh@969 -- # kill 1825385
Received shutdown signal, test time was about 3.000000 seconds
00:35:06.999
00:35:06.999 Latency(us)
00:35:06.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:06.999 ===================================================================================================================
00:35:06.999 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:06.999 16:52:03 compress_compdev -- common/autotest_common.sh@974 -- # wait 1825385
00:35:11.187 16:52:07 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 4096
00:35:11.187 16:52:07 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]]
00:35:11.187 16:52:07 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1827984
00:35:11.187 16:52:07 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
00:35:11.187 16:52:07 compress_compdev
-- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:35:11.187 16:52:07 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1827984 00:35:11.187 16:52:07 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 1827984 ']' 00:35:11.187 16:52:07 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:11.187 16:52:07 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:11.187 16:52:07 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:11.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:11.187 16:52:07 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:11.187 16:52:07 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:11.187 [2024-07-24 16:52:07.310940] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
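The run above starts bdevperf with `-z` (start suspended, wait for RPC) and then `waitforlisten` polls until the process accepts connections on `/var/tmp/spdk.sock` before issuing `perform_tests`. A minimal stdlib-only sketch of that polling pattern (the socket path, retry count, and delay are illustrative; the real helper lives in `autotest_common.sh` and also confirms readiness over RPC):

```python
import os
import socket
import time

def waitforlisten(sock_path, max_retries=100, delay=0.1):
    """Poll until a UNIX-domain socket at sock_path accepts connections.

    Returns True once connect() succeeds, False when retries run out.
    """
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(sock_path)
                    return True
            except OSError:
                pass  # socket file exists but nothing is accepting yet
        time.sleep(delay)
    return False
```

This only checks connectability; a caller that needs the RPC server fully up would follow the connect with a trivial RPC, as the shell helper does.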
00:35:11.187 [2024-07-24 16:52:07.311068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1827984 ] 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:02.1 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:02.3 cannot be used 
00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:01.4 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:01.6 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:11.187 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:11.187 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:11.187 EAL: Requested device 0000:3f:02.7 cannot be used 00:35:11.187 [2024-07-24 16:52:07.526780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:11.187 [2024-07-24 16:52:07.817742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:11.187 [2024-07-24 16:52:07.817746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:12.563 [2024-07-24 16:52:09.151012] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:35:13.129 16:52:09 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:13.129 16:52:09 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:35:13.129 16:52:09 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:35:13.129 16:52:09 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:13.129 16:52:09 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:35:16.412 [2024-07-24 16:52:13.063479] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat 00:35:16.412 
16:52:13 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:35:16.412 16:52:13 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:35:16.412 16:52:13 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:16.412 16:52:13 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:16.412 16:52:13 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:16.412 16:52:13 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:16.412 16:52:13 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:16.671 16:52:13 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:35:16.929 [ 00:35:16.929 { 00:35:16.929 "name": "Nvme0n1", 00:35:16.929 "aliases": [ 00:35:16.929 "5145d35b-30bb-4c90-87f2-8118de64e278" 00:35:16.929 ], 00:35:16.929 "product_name": "NVMe disk", 00:35:16.929 "block_size": 512, 00:35:16.929 "num_blocks": 3907029168, 00:35:16.929 "uuid": "5145d35b-30bb-4c90-87f2-8118de64e278", 00:35:16.929 "assigned_rate_limits": { 00:35:16.929 "rw_ios_per_sec": 0, 00:35:16.929 "rw_mbytes_per_sec": 0, 00:35:16.929 "r_mbytes_per_sec": 0, 00:35:16.929 "w_mbytes_per_sec": 0 00:35:16.929 }, 00:35:16.929 "claimed": false, 00:35:16.929 "zoned": false, 00:35:16.929 "supported_io_types": { 00:35:16.929 "read": true, 00:35:16.929 "write": true, 00:35:16.929 "unmap": true, 00:35:16.929 "flush": true, 00:35:16.929 "reset": true, 00:35:16.929 "nvme_admin": true, 00:35:16.929 "nvme_io": true, 00:35:16.929 "nvme_io_md": false, 00:35:16.929 "write_zeroes": true, 00:35:16.929 "zcopy": false, 00:35:16.929 "get_zone_info": false, 00:35:16.929 "zone_management": false, 00:35:16.929 "zone_append": false, 00:35:16.929 "compare": false, 00:35:16.929 "compare_and_write": false, 
00:35:16.929 "abort": true, 00:35:16.929 "seek_hole": false, 00:35:16.929 "seek_data": false, 00:35:16.929 "copy": false, 00:35:16.929 "nvme_iov_md": false 00:35:16.929 }, 00:35:16.929 "driver_specific": { 00:35:16.929 "nvme": [ 00:35:16.929 { 00:35:16.929 "pci_address": "0000:d8:00.0", 00:35:16.929 "trid": { 00:35:16.929 "trtype": "PCIe", 00:35:16.929 "traddr": "0000:d8:00.0" 00:35:16.929 }, 00:35:16.929 "ctrlr_data": { 00:35:16.929 "cntlid": 0, 00:35:16.929 "vendor_id": "0x8086", 00:35:16.929 "model_number": "INTEL SSDPE2KX020T8", 00:35:16.929 "serial_number": "BTLJ125505KA2P0BGN", 00:35:16.929 "firmware_revision": "VDV10170", 00:35:16.929 "oacs": { 00:35:16.929 "security": 0, 00:35:16.929 "format": 1, 00:35:16.929 "firmware": 1, 00:35:16.929 "ns_manage": 1 00:35:16.929 }, 00:35:16.929 "multi_ctrlr": false, 00:35:16.929 "ana_reporting": false 00:35:16.929 }, 00:35:16.929 "vs": { 00:35:16.929 "nvme_version": "1.2" 00:35:16.929 }, 00:35:16.929 "ns_data": { 00:35:16.929 "id": 1, 00:35:16.929 "can_share": false 00:35:16.929 } 00:35:16.929 } 00:35:16.929 ], 00:35:16.929 "mp_policy": "active_passive" 00:35:16.929 } 00:35:16.929 } 00:35:16.929 ] 00:35:16.929 16:52:13 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:16.929 16:52:13 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:35:17.187 [2024-07-24 16:52:13.799103] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat 00:35:18.122 e600f36c-bd0a-4179-a0e2-e66798b04eaa 00:35:18.122 16:52:14 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:35:18.380 4a7d460d-43a5-467d-8510-47dba6db05d3 00:35:18.380 16:52:15 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:35:18.380 16:52:15 compress_compdev -- 
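The `bdev_get_bdevs` JSON above can be sanity-checked mechanically: 512-byte blocks times 3907029168 blocks is 2.0 TB, consistent with the INTEL SSDPE2KX020T8 model number. A small sketch using only values copied from that output:

```python
def bdev_capacity_bytes(bdev):
    """Total capacity implied by a bdev_get_bdevs entry."""
    return bdev["block_size"] * bdev["num_blocks"]

# Geometry fields copied from the Nvme0n1 entry in the log above
nvme0n1 = {"name": "Nvme0n1", "block_size": 512, "num_blocks": 3907029168}
cap = bdev_capacity_bytes(nvme0n1)  # 2000398934016 bytes, i.e. ~2.0 TB
```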
common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:35:18.380 16:52:15 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:18.380 16:52:15 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:18.380 16:52:15 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:18.380 16:52:15 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:18.380 16:52:15 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:18.638 16:52:15 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:35:18.638 [ 00:35:18.638 { 00:35:18.638 "name": "4a7d460d-43a5-467d-8510-47dba6db05d3", 00:35:18.638 "aliases": [ 00:35:18.638 "lvs0/lv0" 00:35:18.638 ], 00:35:18.638 "product_name": "Logical Volume", 00:35:18.638 "block_size": 512, 00:35:18.638 "num_blocks": 204800, 00:35:18.638 "uuid": "4a7d460d-43a5-467d-8510-47dba6db05d3", 00:35:18.638 "assigned_rate_limits": { 00:35:18.638 "rw_ios_per_sec": 0, 00:35:18.638 "rw_mbytes_per_sec": 0, 00:35:18.638 "r_mbytes_per_sec": 0, 00:35:18.638 "w_mbytes_per_sec": 0 00:35:18.638 }, 00:35:18.638 "claimed": false, 00:35:18.638 "zoned": false, 00:35:18.638 "supported_io_types": { 00:35:18.638 "read": true, 00:35:18.638 "write": true, 00:35:18.638 "unmap": true, 00:35:18.638 "flush": false, 00:35:18.638 "reset": true, 00:35:18.638 "nvme_admin": false, 00:35:18.638 "nvme_io": false, 00:35:18.638 "nvme_io_md": false, 00:35:18.638 "write_zeroes": true, 00:35:18.638 "zcopy": false, 00:35:18.638 "get_zone_info": false, 00:35:18.638 "zone_management": false, 00:35:18.638 "zone_append": false, 00:35:18.638 "compare": false, 00:35:18.638 "compare_and_write": false, 00:35:18.638 "abort": false, 00:35:18.638 "seek_hole": true, 00:35:18.638 "seek_data": true, 00:35:18.638 "copy": false, 
00:35:18.638 "nvme_iov_md": false 00:35:18.638 }, 00:35:18.638 "driver_specific": { 00:35:18.638 "lvol": { 00:35:18.638 "lvol_store_uuid": "e600f36c-bd0a-4179-a0e2-e66798b04eaa", 00:35:18.638 "base_bdev": "Nvme0n1", 00:35:18.638 "thin_provision": true, 00:35:18.638 "num_allocated_clusters": 0, 00:35:18.638 "snapshot": false, 00:35:18.638 "clone": false, 00:35:18.638 "esnap_clone": false 00:35:18.638 } 00:35:18.638 } 00:35:18.638 } 00:35:18.638 ] 00:35:18.638 16:52:15 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:18.638 16:52:15 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:35:18.638 16:52:15 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:35:18.896 [2024-07-24 16:52:15.713171] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:35:18.896 COMP_lvs0/lv0 00:35:18.896 16:52:15 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:35:18.896 16:52:15 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:35:18.896 16:52:15 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:18.896 16:52:15 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:18.896 16:52:15 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:18.896 16:52:15 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:18.896 16:52:15 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:19.154 16:52:15 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:35:19.412 [ 00:35:19.412 { 00:35:19.412 "name": "COMP_lvs0/lv0", 00:35:19.412 "aliases": [ 
00:35:19.412 "3d561a68-78e7-533a-8482-a40caf402190" 00:35:19.412 ], 00:35:19.412 "product_name": "compress", 00:35:19.412 "block_size": 4096, 00:35:19.412 "num_blocks": 25088, 00:35:19.412 "uuid": "3d561a68-78e7-533a-8482-a40caf402190", 00:35:19.412 "assigned_rate_limits": { 00:35:19.412 "rw_ios_per_sec": 0, 00:35:19.412 "rw_mbytes_per_sec": 0, 00:35:19.412 "r_mbytes_per_sec": 0, 00:35:19.412 "w_mbytes_per_sec": 0 00:35:19.412 }, 00:35:19.412 "claimed": false, 00:35:19.412 "zoned": false, 00:35:19.412 "supported_io_types": { 00:35:19.412 "read": true, 00:35:19.412 "write": true, 00:35:19.412 "unmap": false, 00:35:19.412 "flush": false, 00:35:19.412 "reset": false, 00:35:19.412 "nvme_admin": false, 00:35:19.412 "nvme_io": false, 00:35:19.412 "nvme_io_md": false, 00:35:19.412 "write_zeroes": true, 00:35:19.412 "zcopy": false, 00:35:19.412 "get_zone_info": false, 00:35:19.413 "zone_management": false, 00:35:19.413 "zone_append": false, 00:35:19.413 "compare": false, 00:35:19.413 "compare_and_write": false, 00:35:19.413 "abort": false, 00:35:19.413 "seek_hole": false, 00:35:19.413 "seek_data": false, 00:35:19.413 "copy": false, 00:35:19.413 "nvme_iov_md": false 00:35:19.413 }, 00:35:19.413 "driver_specific": { 00:35:19.413 "compress": { 00:35:19.413 "name": "COMP_lvs0/lv0", 00:35:19.413 "base_bdev_name": "4a7d460d-43a5-467d-8510-47dba6db05d3", 00:35:19.413 "pm_path": "/tmp/pmem/e70640a3-ee50-4747-b970-ae223b8a5642" 00:35:19.413 } 00:35:19.413 } 00:35:19.413 } 00:35:19.413 ] 00:35:19.413 16:52:16 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:19.413 16:52:16 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:19.670 [2024-07-24 16:52:16.326323] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat 00:35:19.670 [2024-07-24 16:52:16.329589] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 
0x60e00001d4c0 PMD being used: compress_qat 00:35:19.670 Running I/O for 3 seconds... 00:35:22.954 00:35:22.954 Latency(us) 00:35:22.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:22.954 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:35:22.954 Verification LBA range: start 0x0 length 0x3100 00:35:22.954 COMP_lvs0/lv0 : 3.01 3855.28 15.06 0.00 0.00 8246.11 182.68 14680.06 00:35:22.954 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:35:22.954 Verification LBA range: start 0x3100 length 0x3100 00:35:22.954 COMP_lvs0/lv0 : 3.01 3922.65 15.32 0.00 0.00 8116.18 172.03 13159.63 00:35:22.954 =================================================================================================================== 00:35:22.954 Total : 7777.92 30.38 0.00 0.00 8180.59 172.03 14680.06 00:35:22.954 0 00:35:22.954 16:52:19 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:35:22.954 16:52:19 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:35:22.954 16:52:19 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:35:23.213 16:52:19 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:35:23.213 16:52:19 compress_compdev -- compress/compress.sh@78 -- # killprocess 1827984 00:35:23.213 16:52:19 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 1827984 ']' 00:35:23.213 16:52:19 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 1827984 00:35:23.213 16:52:19 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:35:23.213 16:52:19 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:23.213 16:52:19 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1827984 00:35:23.213 
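The Total row in the latency table above is the sum of the two per-core jobs' IOPS, with latency as the IOPS-weighted mean. A quick cross-check against the figures copied from the table (the helper name is illustrative, not bdevperf's code):

```python
def aggregate(jobs):
    """Combine per-job (iops, avg_latency_us) rows the way bdevperf's
    Total line does: IOPS add; latency is the IOPS-weighted mean."""
    total_iops = sum(iops for iops, _ in jobs)
    weighted_avg = sum(iops * lat for iops, lat in jobs) / total_iops
    return total_iops, weighted_avg

# Per-core rows from the second run above (core masks 0x2 and 0x4)
jobs = [(3855.28, 8246.11), (3922.65, 8116.18)]
total_iops, avg_us = aggregate(jobs)  # ~7777.93 IOPS, ~8180.6 us
```

Both values match the reported Total line (7777.92 IOPS, 8180.59 us average) to within rounding.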
16:52:19 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:23.213 16:52:19 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:23.213 16:52:19 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1827984' 00:35:23.213 killing process with pid 1827984 00:35:23.213 16:52:19 compress_compdev -- common/autotest_common.sh@969 -- # kill 1827984 00:35:23.213 Received shutdown signal, test time was about 3.000000 seconds 00:35:23.213 00:35:23.213 Latency(us) 00:35:23.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:23.213 =================================================================================================================== 00:35:23.213 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:23.213 16:52:19 compress_compdev -- common/autotest_common.sh@974 -- # wait 1827984 00:35:27.449 16:52:23 compress_compdev -- compress/compress.sh@89 -- # run_bdevio 00:35:27.449 16:52:23 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:35:27.449 16:52:23 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=1830603 00:35:27.449 16:52:23 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:27.449 16:52:23 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:35:27.449 16:52:23 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 1830603 00:35:27.449 16:52:23 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 1830603 ']' 00:35:27.449 16:52:23 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.449 16:52:23 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:27.449 16:52:23 compress_compdev -- 
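The `killprocess` teardown above guards its `kill -9`: it first checks the pid is alive (`kill -0`), then reads the process name via `ps --no-headers -o comm=` and refuses to signal a `sudo` wrapper. A rough Python equivalent of those guards, assuming Linux's `/proc` (the real helper is shell in `autotest_common.sh`):

```python
import os

def process_name(pid):
    """Short command name for pid, like `ps -o comm=` (reads Linux /proc)."""
    try:
        with open(f"/proc/{pid}/comm") as f:
            return f.read().strip()
    except FileNotFoundError:
        return None

def safe_kill(pid, sig=9):
    """Signal pid only if it is alive and is not a sudo wrapper."""
    name = process_name(pid)
    if name is None or name == "sudo":
        return False
    os.kill(pid, sig)
    return True
```

The `sudo` check matters because killing the sudo wrapper instead of the reactor process would leave the real workload (here `reactor_1`) running and holding hugepages.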
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.449 16:52:23 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:27.449 16:52:23 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:27.449 [2024-07-24 16:52:23.584064] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:35:27.449 [2024-07-24 16:52:23.584188] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1830603 ] 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:02.1 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:02.3 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:01.4 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:35:27.449 EAL: Requested device 0000:3f:01.6 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:27.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:27.449 EAL: Requested device 0000:3f:02.7 cannot be used 00:35:27.449 [2024-07-24 16:52:23.807526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:27.449 [2024-07-24 16:52:24.098357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.449 [2024-07-24 16:52:24.098424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.449 [2024-07-24 16:52:24.098430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:28.825 [2024-07-24 16:52:25.503576] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:35:29.391 16:52:26 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:29.392 
16:52:26 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:35:29.392 16:52:26 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:35:29.392 16:52:26 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:29.392 16:52:26 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:35:32.677 [2024-07-24 16:52:29.344898] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000026280 PMD being used: compress_qat 00:35:32.677 16:52:29 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:35:32.677 16:52:29 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:35:32.677 16:52:29 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:32.677 16:52:29 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:32.677 16:52:29 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:32.677 16:52:29 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:32.677 16:52:29 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:32.935 16:52:29 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:35:32.935 [ 00:35:32.935 { 00:35:32.935 "name": "Nvme0n1", 00:35:32.935 "aliases": [ 00:35:32.935 "228f9af0-7c10-4e75-86f7-4afa0315e657" 00:35:32.935 ], 00:35:32.935 "product_name": "NVMe disk", 00:35:32.935 "block_size": 512, 00:35:32.935 "num_blocks": 3907029168, 00:35:32.935 "uuid": "228f9af0-7c10-4e75-86f7-4afa0315e657", 00:35:32.935 "assigned_rate_limits": { 00:35:32.935 "rw_ios_per_sec": 0, 00:35:32.935 "rw_mbytes_per_sec": 0, 00:35:32.935 "r_mbytes_per_sec": 0, 00:35:32.935 "w_mbytes_per_sec": 0 
00:35:32.935 }, 00:35:32.935 "claimed": false, 00:35:32.935 "zoned": false, 00:35:32.935 "supported_io_types": { 00:35:32.935 "read": true, 00:35:32.935 "write": true, 00:35:32.935 "unmap": true, 00:35:32.935 "flush": true, 00:35:32.935 "reset": true, 00:35:32.935 "nvme_admin": true, 00:35:32.935 "nvme_io": true, 00:35:32.935 "nvme_io_md": false, 00:35:32.935 "write_zeroes": true, 00:35:32.935 "zcopy": false, 00:35:32.935 "get_zone_info": false, 00:35:32.935 "zone_management": false, 00:35:32.935 "zone_append": false, 00:35:32.935 "compare": false, 00:35:32.935 "compare_and_write": false, 00:35:32.935 "abort": true, 00:35:32.935 "seek_hole": false, 00:35:32.935 "seek_data": false, 00:35:32.935 "copy": false, 00:35:32.935 "nvme_iov_md": false 00:35:32.935 }, 00:35:32.935 "driver_specific": { 00:35:32.935 "nvme": [ 00:35:32.935 { 00:35:32.935 "pci_address": "0000:d8:00.0", 00:35:32.935 "trid": { 00:35:32.935 "trtype": "PCIe", 00:35:32.935 "traddr": "0000:d8:00.0" 00:35:32.935 }, 00:35:32.935 "ctrlr_data": { 00:35:32.935 "cntlid": 0, 00:35:32.935 "vendor_id": "0x8086", 00:35:32.935 "model_number": "INTEL SSDPE2KX020T8", 00:35:32.935 "serial_number": "BTLJ125505KA2P0BGN", 00:35:32.935 "firmware_revision": "VDV10170", 00:35:32.935 "oacs": { 00:35:32.935 "security": 0, 00:35:32.935 "format": 1, 00:35:32.935 "firmware": 1, 00:35:32.935 "ns_manage": 1 00:35:32.935 }, 00:35:32.935 "multi_ctrlr": false, 00:35:32.935 "ana_reporting": false 00:35:32.935 }, 00:35:32.935 "vs": { 00:35:32.935 "nvme_version": "1.2" 00:35:32.935 }, 00:35:32.935 "ns_data": { 00:35:32.935 "id": 1, 00:35:32.935 "can_share": false 00:35:32.935 } 00:35:32.935 } 00:35:32.935 ], 00:35:32.935 "mp_policy": "active_passive" 00:35:32.935 } 00:35:32.935 } 00:35:32.935 ] 00:35:33.194 16:52:29 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:33.194 16:52:29 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 
--clear-method none Nvme0n1 lvs0 00:35:33.194 [2024-07-24 16:52:30.028812] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000026440 PMD being used: compress_qat 00:35:34.570 7b56598c-75fb-407b-b761-5297b60fdcab 00:35:34.570 16:52:31 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:35:34.570 bdd21483-067c-4f73-ab95-50c4d680e75e 00:35:34.570 16:52:31 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:35:34.570 16:52:31 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:35:34.570 16:52:31 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:34.570 16:52:31 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:34.570 16:52:31 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:34.570 16:52:31 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:34.570 16:52:31 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:34.827 16:52:31 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:35:35.086 [ 00:35:35.086 { 00:35:35.086 "name": "bdd21483-067c-4f73-ab95-50c4d680e75e", 00:35:35.086 "aliases": [ 00:35:35.086 "lvs0/lv0" 00:35:35.086 ], 00:35:35.086 "product_name": "Logical Volume", 00:35:35.086 "block_size": 512, 00:35:35.086 "num_blocks": 204800, 00:35:35.086 "uuid": "bdd21483-067c-4f73-ab95-50c4d680e75e", 00:35:35.086 "assigned_rate_limits": { 00:35:35.086 "rw_ios_per_sec": 0, 00:35:35.086 "rw_mbytes_per_sec": 0, 00:35:35.086 "r_mbytes_per_sec": 0, 00:35:35.086 "w_mbytes_per_sec": 0 00:35:35.086 }, 00:35:35.086 "claimed": false, 00:35:35.086 "zoned": false, 00:35:35.086 "supported_io_types": { 00:35:35.086 "read": true, 
00:35:35.086 "write": true, 00:35:35.086 "unmap": true, 00:35:35.086 "flush": false, 00:35:35.086 "reset": true, 00:35:35.086 "nvme_admin": false, 00:35:35.086 "nvme_io": false, 00:35:35.086 "nvme_io_md": false, 00:35:35.086 "write_zeroes": true, 00:35:35.086 "zcopy": false, 00:35:35.086 "get_zone_info": false, 00:35:35.086 "zone_management": false, 00:35:35.086 "zone_append": false, 00:35:35.086 "compare": false, 00:35:35.086 "compare_and_write": false, 00:35:35.086 "abort": false, 00:35:35.086 "seek_hole": true, 00:35:35.086 "seek_data": true, 00:35:35.086 "copy": false, 00:35:35.086 "nvme_iov_md": false 00:35:35.086 }, 00:35:35.086 "driver_specific": { 00:35:35.086 "lvol": { 00:35:35.086 "lvol_store_uuid": "7b56598c-75fb-407b-b761-5297b60fdcab", 00:35:35.086 "base_bdev": "Nvme0n1", 00:35:35.086 "thin_provision": true, 00:35:35.086 "num_allocated_clusters": 0, 00:35:35.086 "snapshot": false, 00:35:35.086 "clone": false, 00:35:35.086 "esnap_clone": false 00:35:35.086 } 00:35:35.086 } 00:35:35.086 } 00:35:35.086 ] 00:35:35.086 16:52:31 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:35.086 16:52:31 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:35:35.086 16:52:31 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:35:35.344 [2024-07-24 16:52:32.047767] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:35:35.344 COMP_lvs0/lv0 00:35:35.344 16:52:32 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:35:35.344 16:52:32 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:35:35.344 16:52:32 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:35.344 16:52:32 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:35.344 16:52:32 compress_compdev -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:35.344 16:52:32 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:35.344 16:52:32 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:35.602 16:52:32 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:35:35.602 [ 00:35:35.602 { 00:35:35.602 "name": "COMP_lvs0/lv0", 00:35:35.602 "aliases": [ 00:35:35.602 "4d1dbdaf-4aa1-5c4a-bc84-6696daa7450d" 00:35:35.602 ], 00:35:35.602 "product_name": "compress", 00:35:35.602 "block_size": 512, 00:35:35.602 "num_blocks": 200704, 00:35:35.602 "uuid": "4d1dbdaf-4aa1-5c4a-bc84-6696daa7450d", 00:35:35.602 "assigned_rate_limits": { 00:35:35.602 "rw_ios_per_sec": 0, 00:35:35.603 "rw_mbytes_per_sec": 0, 00:35:35.603 "r_mbytes_per_sec": 0, 00:35:35.603 "w_mbytes_per_sec": 0 00:35:35.603 }, 00:35:35.603 "claimed": false, 00:35:35.603 "zoned": false, 00:35:35.603 "supported_io_types": { 00:35:35.603 "read": true, 00:35:35.603 "write": true, 00:35:35.603 "unmap": false, 00:35:35.603 "flush": false, 00:35:35.603 "reset": false, 00:35:35.603 "nvme_admin": false, 00:35:35.603 "nvme_io": false, 00:35:35.603 "nvme_io_md": false, 00:35:35.603 "write_zeroes": true, 00:35:35.603 "zcopy": false, 00:35:35.603 "get_zone_info": false, 00:35:35.603 "zone_management": false, 00:35:35.603 "zone_append": false, 00:35:35.603 "compare": false, 00:35:35.603 "compare_and_write": false, 00:35:35.603 "abort": false, 00:35:35.603 "seek_hole": false, 00:35:35.603 "seek_data": false, 00:35:35.603 "copy": false, 00:35:35.603 "nvme_iov_md": false 00:35:35.603 }, 00:35:35.603 "driver_specific": { 00:35:35.603 "compress": { 00:35:35.603 "name": "COMP_lvs0/lv0", 00:35:35.603 "base_bdev_name": "bdd21483-067c-4f73-ab95-50c4d680e75e", 00:35:35.603 "pm_path": 
"/tmp/pmem/d0166086-1e4b-4b35-a58c-31116f9dba33" 00:35:35.603 } 00:35:35.603 } 00:35:35.603 } 00:35:35.603 ] 00:35:35.861 16:52:32 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:35.861 16:52:32 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:35:35.861 [2024-07-24 16:52:32.592421] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000171e0 PMD being used: compress_qat 00:35:35.861 I/O targets: 00:35:35.862 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:35:35.862 00:35:35.862 00:35:35.862 CUnit - A unit testing framework for C - Version 2.1-3 00:35:35.862 http://cunit.sourceforge.net/ 00:35:35.862 00:35:35.862 00:35:35.862 Suite: bdevio tests on: COMP_lvs0/lv0 00:35:35.862 Test: blockdev write read block ...passed 00:35:35.862 Test: blockdev write zeroes read block ...passed 00:35:35.862 Test: blockdev write zeroes read no split ...passed 00:35:35.862 Test: blockdev write zeroes read split ...passed 00:35:36.120 Test: blockdev write zeroes read split partial ...passed 00:35:36.120 Test: blockdev reset ...[2024-07-24 16:52:32.724809] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:35:36.120 passed 00:35:36.120 Test: blockdev write read 8 blocks ...passed 00:35:36.120 Test: blockdev write read size > 128k ...passed 00:35:36.120 Test: blockdev write read invalid size ...passed 00:35:36.120 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:36.120 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:36.120 Test: blockdev write read max offset ...passed 00:35:36.120 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:36.120 Test: blockdev writev readv 8 blocks ...passed 00:35:36.120 Test: blockdev writev readv 30 x 1block ...passed 00:35:36.120 Test: blockdev writev readv block ...passed 00:35:36.120 Test: blockdev writev 
readv size > 128k ...passed 00:35:36.120 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:36.120 Test: blockdev comparev and writev ...passed 00:35:36.120 Test: blockdev nvme passthru rw ...passed 00:35:36.120 Test: blockdev nvme passthru vendor specific ...passed 00:35:36.120 Test: blockdev nvme admin passthru ...passed 00:35:36.120 Test: blockdev copy ...passed 00:35:36.120 00:35:36.120 Run Summary: Type Total Ran Passed Failed Inactive 00:35:36.120 suites 1 1 n/a 0 0 00:35:36.120 tests 23 23 23 0 0 00:35:36.120 asserts 130 130 130 0 n/a 00:35:36.120 00:35:36.120 Elapsed time = 0.413 seconds 00:35:36.120 0 00:35:36.120 16:52:32 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:35:36.120 16:52:32 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:35:36.378 16:52:33 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:35:36.637 16:52:33 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:35:36.637 16:52:33 compress_compdev -- compress/compress.sh@62 -- # killprocess 1830603 00:35:36.637 16:52:33 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 1830603 ']' 00:35:36.637 16:52:33 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 1830603 00:35:36.637 16:52:33 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:35:36.637 16:52:33 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:36.637 16:52:33 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1830603 00:35:36.637 16:52:33 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:36.637 16:52:33 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:36.637 16:52:33 compress_compdev -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 1830603' 00:35:36.637 killing process with pid 1830603 00:35:36.637 16:52:33 compress_compdev -- common/autotest_common.sh@969 -- # kill 1830603 00:35:36.637 16:52:33 compress_compdev -- common/autotest_common.sh@974 -- # wait 1830603 00:35:40.825 16:52:36 compress_compdev -- compress/compress.sh@91 -- # '[' 1 -eq 1 ']' 00:35:40.825 16:52:36 compress_compdev -- compress/compress.sh@92 -- # run_bdevperf 64 16384 30 00:35:40.825 16:52:36 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:35:40.825 16:52:36 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=1832811 00:35:40.825 16:52:36 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:40.825 16:52:36 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 30 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:35:40.825 16:52:36 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 1832811 00:35:40.825 16:52:36 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 1832811 ']' 00:35:40.825 16:52:36 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:40.825 16:52:36 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:40.825 16:52:36 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
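The bdev listings in the run above give enough numbers to sanity-check the geometry: `lvs0/lv0` is created with size 100 (MiB) and reported as 204800 blocks of 512 bytes, while the `COMP_lvs0/lv0` bdev stacked on it reports 200704 blocks ("98 MiB" per the bdevio banner). A minimal Python check of that arithmetic — the figures are taken from the log; the reading of the 4096-block difference as space the compress vbdev holds back (e.g. for its own metadata) is an assumption, not something the log states:

```python
# Sizes as reported by bdev_get_bdevs in the log above.
BLOCK_SIZE = 512        # bytes, same for both bdevs
LVOL_BLOCKS = 204_800   # lvs0/lv0, created via `bdev_lvol_create -t -l lvs0 lv0 100`
COMP_BLOCKS = 200_704   # COMP_lvs0/lv0

MIB = 1024 * 1024

lvol_mib = LVOL_BLOCKS * BLOCK_SIZE / MIB
comp_mib = COMP_BLOCKS * BLOCK_SIZE / MIB
hidden_blocks = LVOL_BLOCKS - COMP_BLOCKS  # blocks not exposed by the compress bdev

print(lvol_mib)       # 100.0 -> matches the lvol size argument
print(comp_mib)       # 98.0  -> matches "200704 blocks of 512 bytes (98 MiB)"
print(hidden_blocks)  # 4096 blocks, i.e. 2 MiB of difference
```

The numbers line up exactly: 100 MiB at the lvol layer, 98 MiB exposed by the compress bdev, a 2 MiB gap between them.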
00:35:40.825 16:52:36 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:40.825 16:52:36 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:40.825 [2024-07-24 16:52:36.940756] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:35:40.825 [2024-07-24 16:52:36.940880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1832811 ] 00:35:40.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.825 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:40.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.825 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:40.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.825 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:40.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.825 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:40.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.825 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:40.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.825 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:40.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.825 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:40.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.825 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:40.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.825 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:40.825 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.825 EAL: Requested device 0000:3d:02.1 cannot be 
used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3d:02.3 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:01.4 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:01.6 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:40.826 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:40.826 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.826 EAL: Requested device 0000:3f:02.7 cannot be used 00:35:40.826 [2024-07-24 16:52:37.158314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:40.826 [2024-07-24 16:52:37.429804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.826 [2024-07-24 16:52:37.429806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:42.202 [2024-07-24 16:52:38.829855] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:35:42.769 16:52:39 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:42.769 16:52:39 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:35:42.769 16:52:39 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:35:42.769 16:52:39 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:42.769 16:52:39 compress_compdev 
-- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:35:46.051 [2024-07-24 16:52:42.694983] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat 00:35:46.051 16:52:42 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:35:46.051 16:52:42 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:35:46.051 16:52:42 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:46.051 16:52:42 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:46.051 16:52:42 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:46.051 16:52:42 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:46.051 16:52:42 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:46.309 16:52:42 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:35:46.309 [ 00:35:46.309 { 00:35:46.309 "name": "Nvme0n1", 00:35:46.309 "aliases": [ 00:35:46.309 "404bcd83-54d1-4f0d-976a-982332d979ff" 00:35:46.309 ], 00:35:46.309 "product_name": "NVMe disk", 00:35:46.309 "block_size": 512, 00:35:46.309 "num_blocks": 3907029168, 00:35:46.309 "uuid": "404bcd83-54d1-4f0d-976a-982332d979ff", 00:35:46.309 "assigned_rate_limits": { 00:35:46.309 "rw_ios_per_sec": 0, 00:35:46.309 "rw_mbytes_per_sec": 0, 00:35:46.309 "r_mbytes_per_sec": 0, 00:35:46.309 "w_mbytes_per_sec": 0 00:35:46.309 }, 00:35:46.309 "claimed": false, 00:35:46.309 "zoned": false, 00:35:46.309 "supported_io_types": { 00:35:46.309 "read": true, 00:35:46.309 "write": true, 00:35:46.309 "unmap": true, 00:35:46.309 "flush": true, 00:35:46.309 "reset": true, 00:35:46.309 "nvme_admin": true, 00:35:46.309 "nvme_io": true, 00:35:46.309 
"nvme_io_md": false, 00:35:46.309 "write_zeroes": true, 00:35:46.309 "zcopy": false, 00:35:46.309 "get_zone_info": false, 00:35:46.309 "zone_management": false, 00:35:46.309 "zone_append": false, 00:35:46.309 "compare": false, 00:35:46.309 "compare_and_write": false, 00:35:46.309 "abort": true, 00:35:46.309 "seek_hole": false, 00:35:46.309 "seek_data": false, 00:35:46.309 "copy": false, 00:35:46.309 "nvme_iov_md": false 00:35:46.309 }, 00:35:46.309 "driver_specific": { 00:35:46.309 "nvme": [ 00:35:46.309 { 00:35:46.309 "pci_address": "0000:d8:00.0", 00:35:46.309 "trid": { 00:35:46.309 "trtype": "PCIe", 00:35:46.309 "traddr": "0000:d8:00.0" 00:35:46.309 }, 00:35:46.309 "ctrlr_data": { 00:35:46.309 "cntlid": 0, 00:35:46.309 "vendor_id": "0x8086", 00:35:46.309 "model_number": "INTEL SSDPE2KX020T8", 00:35:46.309 "serial_number": "BTLJ125505KA2P0BGN", 00:35:46.309 "firmware_revision": "VDV10170", 00:35:46.309 "oacs": { 00:35:46.309 "security": 0, 00:35:46.309 "format": 1, 00:35:46.309 "firmware": 1, 00:35:46.309 "ns_manage": 1 00:35:46.309 }, 00:35:46.309 "multi_ctrlr": false, 00:35:46.309 "ana_reporting": false 00:35:46.309 }, 00:35:46.309 "vs": { 00:35:46.309 "nvme_version": "1.2" 00:35:46.309 }, 00:35:46.309 "ns_data": { 00:35:46.309 "id": 1, 00:35:46.309 "can_share": false 00:35:46.309 } 00:35:46.309 } 00:35:46.309 ], 00:35:46.309 "mp_policy": "active_passive" 00:35:46.309 } 00:35:46.309 } 00:35:46.309 ] 00:35:46.567 16:52:43 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:46.567 16:52:43 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:35:46.567 [2024-07-24 16:52:43.398543] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat 00:35:47.567 016c9893-e256-4be9-bf52-f54549a75f7b 00:35:47.567 16:52:44 compress_compdev -- compress/compress.sh@38 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:35:47.824 14e03c7e-e6eb-4aaa-82ce-9c92732ad4f6 00:35:47.824 16:52:44 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:35:47.824 16:52:44 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:35:47.824 16:52:44 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:47.824 16:52:44 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:47.824 16:52:44 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:47.824 16:52:44 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:47.824 16:52:44 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:48.080 16:52:44 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:35:48.338 [ 00:35:48.338 { 00:35:48.338 "name": "14e03c7e-e6eb-4aaa-82ce-9c92732ad4f6", 00:35:48.338 "aliases": [ 00:35:48.338 "lvs0/lv0" 00:35:48.338 ], 00:35:48.338 "product_name": "Logical Volume", 00:35:48.338 "block_size": 512, 00:35:48.338 "num_blocks": 204800, 00:35:48.338 "uuid": "14e03c7e-e6eb-4aaa-82ce-9c92732ad4f6", 00:35:48.338 "assigned_rate_limits": { 00:35:48.338 "rw_ios_per_sec": 0, 00:35:48.338 "rw_mbytes_per_sec": 0, 00:35:48.338 "r_mbytes_per_sec": 0, 00:35:48.338 "w_mbytes_per_sec": 0 00:35:48.338 }, 00:35:48.338 "claimed": false, 00:35:48.338 "zoned": false, 00:35:48.338 "supported_io_types": { 00:35:48.338 "read": true, 00:35:48.338 "write": true, 00:35:48.338 "unmap": true, 00:35:48.338 "flush": false, 00:35:48.338 "reset": true, 00:35:48.338 "nvme_admin": false, 00:35:48.338 "nvme_io": false, 00:35:48.338 "nvme_io_md": false, 00:35:48.338 "write_zeroes": true, 00:35:48.338 "zcopy": false, 00:35:48.338 
"get_zone_info": false, 00:35:48.338 "zone_management": false, 00:35:48.338 "zone_append": false, 00:35:48.338 "compare": false, 00:35:48.338 "compare_and_write": false, 00:35:48.338 "abort": false, 00:35:48.338 "seek_hole": true, 00:35:48.338 "seek_data": true, 00:35:48.338 "copy": false, 00:35:48.338 "nvme_iov_md": false 00:35:48.338 }, 00:35:48.338 "driver_specific": { 00:35:48.338 "lvol": { 00:35:48.338 "lvol_store_uuid": "016c9893-e256-4be9-bf52-f54549a75f7b", 00:35:48.338 "base_bdev": "Nvme0n1", 00:35:48.338 "thin_provision": true, 00:35:48.338 "num_allocated_clusters": 0, 00:35:48.338 "snapshot": false, 00:35:48.338 "clone": false, 00:35:48.338 "esnap_clone": false 00:35:48.338 } 00:35:48.338 } 00:35:48.338 } 00:35:48.338 ] 00:35:48.338 16:52:45 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:48.338 16:52:45 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:35:48.338 16:52:45 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:35:48.611 [2024-07-24 16:52:45.249721] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:35:48.611 COMP_lvs0/lv0 00:35:48.611 16:52:45 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:35:48.611 16:52:45 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:35:48.611 16:52:45 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:48.611 16:52:45 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:48.611 16:52:45 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:48.611 16:52:45 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:48.611 16:52:45 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_wait_for_examine 00:35:48.869 16:52:45 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:35:48.869 [ 00:35:48.869 { 00:35:48.869 "name": "COMP_lvs0/lv0", 00:35:48.869 "aliases": [ 00:35:48.869 "b8c3f54e-5673-5d6d-ae1c-470e63b4b5cd" 00:35:48.869 ], 00:35:48.869 "product_name": "compress", 00:35:48.869 "block_size": 512, 00:35:48.869 "num_blocks": 200704, 00:35:48.869 "uuid": "b8c3f54e-5673-5d6d-ae1c-470e63b4b5cd", 00:35:48.869 "assigned_rate_limits": { 00:35:48.869 "rw_ios_per_sec": 0, 00:35:48.869 "rw_mbytes_per_sec": 0, 00:35:48.869 "r_mbytes_per_sec": 0, 00:35:48.869 "w_mbytes_per_sec": 0 00:35:48.869 }, 00:35:48.869 "claimed": false, 00:35:48.869 "zoned": false, 00:35:48.869 "supported_io_types": { 00:35:48.869 "read": true, 00:35:48.869 "write": true, 00:35:48.869 "unmap": false, 00:35:48.869 "flush": false, 00:35:48.869 "reset": false, 00:35:48.869 "nvme_admin": false, 00:35:48.869 "nvme_io": false, 00:35:48.869 "nvme_io_md": false, 00:35:48.869 "write_zeroes": true, 00:35:48.869 "zcopy": false, 00:35:48.869 "get_zone_info": false, 00:35:48.869 "zone_management": false, 00:35:48.869 "zone_append": false, 00:35:48.869 "compare": false, 00:35:48.869 "compare_and_write": false, 00:35:48.869 "abort": false, 00:35:48.869 "seek_hole": false, 00:35:48.869 "seek_data": false, 00:35:48.869 "copy": false, 00:35:48.869 "nvme_iov_md": false 00:35:48.869 }, 00:35:48.869 "driver_specific": { 00:35:48.869 "compress": { 00:35:48.869 "name": "COMP_lvs0/lv0", 00:35:48.869 "base_bdev_name": "14e03c7e-e6eb-4aaa-82ce-9c92732ad4f6", 00:35:48.869 "pm_path": "/tmp/pmem/982ef32a-868b-4beb-aab9-6cca6a5af177" 00:35:48.869 } 00:35:48.869 } 00:35:48.869 } 00:35:48.869 ] 00:35:48.869 16:52:45 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:48.869 16:52:45 compress_compdev -- compress/compress.sh@75 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:49.127 [2024-07-24 16:52:45.825801] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat 00:35:49.127 [2024-07-24 16:52:45.829024] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d5a0 PMD being used: compress_qat 00:35:49.127 Running I/O for 30 seconds... 00:36:21.199 00:36:21.200 Latency(us) 00:36:21.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:21.200 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 64, IO size: 16384) 00:36:21.200 Verification LBA range: start 0x0 length 0xc40 00:36:21.200 COMP_lvs0/lv0 : 30.01 1672.98 26.14 0.00 0.00 38025.70 365.36 31876.71 00:36:21.200 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 64, IO size: 16384) 00:36:21.200 Verification LBA range: start 0xc40 length 0xc40 00:36:21.200 COMP_lvs0/lv0 : 30.01 5258.56 82.17 0.00 0.00 12061.29 335.87 21286.09 00:36:21.200 =================================================================================================================== 00:36:21.200 Total : 6931.54 108.31 0.00 0.00 18328.63 335.87 31876.71 00:36:21.200 0 00:36:21.200 16:53:15 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:36:21.200 16:53:15 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:36:21.200 16:53:16 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:36:21.200 16:53:16 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:36:21.200 16:53:16 compress_compdev -- compress/compress.sh@78 -- # killprocess 1832811 00:36:21.200 16:53:16 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 1832811 ']' 00:36:21.200 16:53:16 compress_compdev -- 
common/autotest_common.sh@954 -- # kill -0 1832811 00:36:21.200 16:53:16 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:36:21.200 16:53:16 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:21.200 16:53:16 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1832811 00:36:21.200 16:53:16 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:21.200 16:53:16 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:21.200 16:53:16 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1832811' 00:36:21.200 killing process with pid 1832811 00:36:21.200 16:53:16 compress_compdev -- common/autotest_common.sh@969 -- # kill 1832811 00:36:21.200 Received shutdown signal, test time was about 30.000000 seconds 00:36:21.200 00:36:21.200 Latency(us) 00:36:21.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:21.200 =================================================================================================================== 00:36:21.200 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:21.200 16:53:16 compress_compdev -- common/autotest_common.sh@974 -- # wait 1832811 00:36:23.730 16:53:19 compress_compdev -- compress/compress.sh@95 -- # export TEST_TRANSPORT=tcp 00:36:23.730 16:53:19 compress_compdev -- compress/compress.sh@95 -- # TEST_TRANSPORT=tcp 00:36:23.730 16:53:19 compress_compdev -- compress/compress.sh@96 -- # NET_TYPE=virt 00:36:23.730 16:53:19 compress_compdev -- compress/compress.sh@96 -- # nvmftestinit 00:36:23.730 16:53:19 compress_compdev -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:23.730 16:53:19 compress_compdev -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:23.730 16:53:19 compress_compdev -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:23.730 16:53:19 compress_compdev -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:23.730 
16:53:19 compress_compdev -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:23.730 16:53:19 compress_compdev -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:23.730 16:53:19 compress_compdev -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:23.730 16:53:19 compress_compdev -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@432 -- # nvmf_veth_init 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:36:23.730 Cannot find device "nvmf_tgt_br" 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@155 -- # true 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:36:23.730 Cannot find device "nvmf_tgt_br2" 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@156 -- # true 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:36:23.730 Cannot find device "nvmf_tgt_br" 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@158 -- # true 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:36:23.730 Cannot find device "nvmf_tgt_br2" 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@159 -- # true 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:23.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@162 -- # true 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:23.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@163 -- # true 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@166 -- # ip netns add 
nvmf_tgt_ns_spdk 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:36:23.730 16:53:20 compress_compdev -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@196 -- # ip link set 
nvmf_init_br master nvmf_br 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:36:23.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:23.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:36:23.731 00:36:23.731 --- 10.0.0.2 ping statistics --- 00:36:23.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.731 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:36:23.731 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:23.731 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:36:23.731 00:36:23.731 --- 10.0.0.3 ping statistics --- 00:36:23.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.731 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:23.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:23.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:36:23.731 00:36:23.731 --- 10.0.0.1 ping statistics --- 00:36:23.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.731 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@433 -- # return 0 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:23.731 16:53:20 compress_compdev -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:23.989 16:53:20 compress_compdev -- compress/compress.sh@97 -- # nvmfappstart -m 0x7 00:36:23.989 16:53:20 compress_compdev -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:23.989 16:53:20 compress_compdev -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:23.989 16:53:20 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:36:23.989 16:53:20 compress_compdev -- nvmf/common.sh@481 -- # nvmfpid=1840063 00:36:23.989 16:53:20 compress_compdev -- nvmf/common.sh@482 -- # waitforlisten 1840063 00:36:23.989 16:53:20 compress_compdev -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:36:23.990 16:53:20 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 1840063 ']' 00:36:23.990 16:53:20 compress_compdev -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:36:23.990 16:53:20 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:23.990 16:53:20 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:23.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.990 16:53:20 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:23.990 16:53:20 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:36:23.990 [2024-07-24 16:53:20.724192] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:36:23.990 [2024-07-24 16:53:20.724308] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:24.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.247 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:24.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.247 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:24.248 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3d:02.3 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:36:24.248 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:24.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:24.248 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:24.248 [2024-07-24 16:53:20.959451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:24.505 [2024-07-24 16:53:21.251630] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:24.505 [2024-07-24 16:53:21.251683] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
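The `nvmf_veth_init` sequence traced earlier in this log (nvmf/common.sh@141-207) can be condensed into a short sketch: create a network namespace for the SPDK target, add one veth pair per endpoint, move the target-side ends into the namespace, assign the 10.0.0.x addresses, and bridge the root-namespace ends together. This is a dry-run illustration, not SPDK's actual script: the interface names and addresses are copied from the log, the `run` helper only echoes each command, and a real run needs root plus the iptables rules omitted here.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the topology nvmf_veth_init builds in the trace above.
# Names/addresses mirror the log; commands are echoed, not executed.
run() { printf '+ %s\n' "$*"; }   # replace body with "$@" to execute for real

NS=nvmf_tgt_ns_spdk
run ip netns add "$NS"

# One veth pair per endpoint; the *_br ends stay in the root namespace.
run ip link add nvmf_init_if type veth peer name nvmf_init_br
run ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
run ip link set nvmf_tgt_if netns "$NS"

# Addressing as seen in the log: initiator 10.0.0.1, target 10.0.0.2.
run ip addr add 10.0.0.1/24 dev nvmf_init_if
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# A bridge joins the root-namespace veth ends, so the initiator can reach
# the namespaced target (verified by the pings in the log).
run ip link add nvmf_br type bridge
for ifc in nvmf_init_br nvmf_tgt_br; do
    run ip link set "$ifc" master nvmf_br
done
```

With `run` swapped for real execution, the log's `ping -c 1 10.0.0.2` check would exercise the initiator-to-target path before the nvmf target starts listening on port 4420.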
00:36:24.505 [2024-07-24 16:53:21.251703] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:24.505 [2024-07-24 16:53:21.251718] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:24.505 [2024-07-24 16:53:21.251734] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:24.505 [2024-07-24 16:53:21.251828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.505 [2024-07-24 16:53:21.251897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.505 [2024-07-24 16:53:21.251903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:25.070 16:53:21 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:25.070 16:53:21 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:36:25.070 16:53:21 compress_compdev -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:25.070 16:53:21 compress_compdev -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:25.070 16:53:21 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:36:25.070 16:53:21 compress_compdev -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:25.070 16:53:21 compress_compdev -- compress/compress.sh@98 -- # trap 'nvmftestfini; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:25.070 16:53:21 compress_compdev -- compress/compress.sh@101 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -u 8192 00:36:25.328 [2024-07-24 16:53:22.061936] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:25.328 16:53:22 compress_compdev -- compress/compress.sh@102 -- # create_vols 00:36:25.328 16:53:22 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:25.328 16:53:22 compress_compdev -- 
compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:36:28.612 16:53:25 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:36:28.612 16:53:25 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:36:28.612 16:53:25 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:28.612 16:53:25 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:36:28.612 16:53:25 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:28.612 16:53:25 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:28.612 16:53:25 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:28.870 16:53:25 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:36:29.128 [ 00:36:29.129 { 00:36:29.129 "name": "Nvme0n1", 00:36:29.129 "aliases": [ 00:36:29.129 "e2f17eab-6351-4919-b0e1-fbbe69f8fc55" 00:36:29.129 ], 00:36:29.129 "product_name": "NVMe disk", 00:36:29.129 "block_size": 512, 00:36:29.129 "num_blocks": 3907029168, 00:36:29.129 "uuid": "e2f17eab-6351-4919-b0e1-fbbe69f8fc55", 00:36:29.129 "assigned_rate_limits": { 00:36:29.129 "rw_ios_per_sec": 0, 00:36:29.129 "rw_mbytes_per_sec": 0, 00:36:29.129 "r_mbytes_per_sec": 0, 00:36:29.129 "w_mbytes_per_sec": 0 00:36:29.129 }, 00:36:29.129 "claimed": false, 00:36:29.129 "zoned": false, 00:36:29.129 "supported_io_types": { 00:36:29.129 "read": true, 00:36:29.129 "write": true, 00:36:29.129 "unmap": true, 00:36:29.129 "flush": true, 00:36:29.129 "reset": true, 00:36:29.129 "nvme_admin": true, 00:36:29.129 "nvme_io": true, 00:36:29.129 "nvme_io_md": false, 00:36:29.129 "write_zeroes": true, 00:36:29.129 "zcopy": false, 00:36:29.129 "get_zone_info": false, 00:36:29.129 
"zone_management": false, 00:36:29.129 "zone_append": false, 00:36:29.129 "compare": false, 00:36:29.129 "compare_and_write": false, 00:36:29.129 "abort": true, 00:36:29.129 "seek_hole": false, 00:36:29.129 "seek_data": false, 00:36:29.129 "copy": false, 00:36:29.129 "nvme_iov_md": false 00:36:29.129 }, 00:36:29.129 "driver_specific": { 00:36:29.129 "nvme": [ 00:36:29.129 { 00:36:29.129 "pci_address": "0000:d8:00.0", 00:36:29.129 "trid": { 00:36:29.129 "trtype": "PCIe", 00:36:29.129 "traddr": "0000:d8:00.0" 00:36:29.129 }, 00:36:29.129 "ctrlr_data": { 00:36:29.129 "cntlid": 0, 00:36:29.129 "vendor_id": "0x8086", 00:36:29.129 "model_number": "INTEL SSDPE2KX020T8", 00:36:29.129 "serial_number": "BTLJ125505KA2P0BGN", 00:36:29.129 "firmware_revision": "VDV10170", 00:36:29.129 "oacs": { 00:36:29.129 "security": 0, 00:36:29.129 "format": 1, 00:36:29.129 "firmware": 1, 00:36:29.129 "ns_manage": 1 00:36:29.129 }, 00:36:29.129 "multi_ctrlr": false, 00:36:29.129 "ana_reporting": false 00:36:29.129 }, 00:36:29.129 "vs": { 00:36:29.129 "nvme_version": "1.2" 00:36:29.129 }, 00:36:29.129 "ns_data": { 00:36:29.129 "id": 1, 00:36:29.129 "can_share": false 00:36:29.129 } 00:36:29.129 } 00:36:29.129 ], 00:36:29.129 "mp_policy": "active_passive" 00:36:29.129 } 00:36:29.129 } 00:36:29.129 ] 00:36:29.129 16:53:25 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:36:29.129 16:53:25 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:36:30.503 afb7013f-b964-4ca0-9ab4-0b6dbbc4d2ef 00:36:30.503 16:53:27 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:36:30.503 15483139-672b-4093-b065-a2f2fd4af55e 00:36:30.503 16:53:27 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:36:30.503 16:53:27 compress_compdev -- 
common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:36:30.503 16:53:27 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:30.503 16:53:27 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:36:30.503 16:53:27 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:30.503 16:53:27 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:30.503 16:53:27 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:30.762 16:53:27 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:36:31.020 [ 00:36:31.020 { 00:36:31.020 "name": "15483139-672b-4093-b065-a2f2fd4af55e", 00:36:31.020 "aliases": [ 00:36:31.020 "lvs0/lv0" 00:36:31.020 ], 00:36:31.020 "product_name": "Logical Volume", 00:36:31.021 "block_size": 512, 00:36:31.021 "num_blocks": 204800, 00:36:31.021 "uuid": "15483139-672b-4093-b065-a2f2fd4af55e", 00:36:31.021 "assigned_rate_limits": { 00:36:31.021 "rw_ios_per_sec": 0, 00:36:31.021 "rw_mbytes_per_sec": 0, 00:36:31.021 "r_mbytes_per_sec": 0, 00:36:31.021 "w_mbytes_per_sec": 0 00:36:31.021 }, 00:36:31.021 "claimed": false, 00:36:31.021 "zoned": false, 00:36:31.021 "supported_io_types": { 00:36:31.021 "read": true, 00:36:31.021 "write": true, 00:36:31.021 "unmap": true, 00:36:31.021 "flush": false, 00:36:31.021 "reset": true, 00:36:31.021 "nvme_admin": false, 00:36:31.021 "nvme_io": false, 00:36:31.021 "nvme_io_md": false, 00:36:31.021 "write_zeroes": true, 00:36:31.021 "zcopy": false, 00:36:31.021 "get_zone_info": false, 00:36:31.021 "zone_management": false, 00:36:31.021 "zone_append": false, 00:36:31.021 "compare": false, 00:36:31.021 "compare_and_write": false, 00:36:31.021 "abort": false, 00:36:31.021 "seek_hole": true, 00:36:31.021 "seek_data": true, 00:36:31.021 "copy": false, 
00:36:31.021 "nvme_iov_md": false 00:36:31.021 }, 00:36:31.021 "driver_specific": { 00:36:31.021 "lvol": { 00:36:31.021 "lvol_store_uuid": "afb7013f-b964-4ca0-9ab4-0b6dbbc4d2ef", 00:36:31.021 "base_bdev": "Nvme0n1", 00:36:31.021 "thin_provision": true, 00:36:31.021 "num_allocated_clusters": 0, 00:36:31.021 "snapshot": false, 00:36:31.021 "clone": false, 00:36:31.021 "esnap_clone": false 00:36:31.021 } 00:36:31.021 } 00:36:31.021 } 00:36:31.021 ] 00:36:31.021 16:53:27 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:36:31.021 16:53:27 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:36:31.021 16:53:27 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:36:31.279 [2024-07-24 16:53:27.935738] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:36:31.279 COMP_lvs0/lv0 00:36:31.279 16:53:27 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:36:31.279 16:53:27 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:36:31.279 16:53:27 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:31.279 16:53:27 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:36:31.279 16:53:27 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:31.279 16:53:27 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:31.279 16:53:27 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:31.540 16:53:28 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:36:31.540 [ 00:36:31.540 { 00:36:31.540 "name": "COMP_lvs0/lv0", 00:36:31.540 "aliases": [ 00:36:31.540 
"5f98a15f-8cbe-5cc2-be3a-0e530549bad1" 00:36:31.540 ], 00:36:31.540 "product_name": "compress", 00:36:31.540 "block_size": 512, 00:36:31.540 "num_blocks": 200704, 00:36:31.540 "uuid": "5f98a15f-8cbe-5cc2-be3a-0e530549bad1", 00:36:31.540 "assigned_rate_limits": { 00:36:31.540 "rw_ios_per_sec": 0, 00:36:31.540 "rw_mbytes_per_sec": 0, 00:36:31.540 "r_mbytes_per_sec": 0, 00:36:31.540 "w_mbytes_per_sec": 0 00:36:31.540 }, 00:36:31.540 "claimed": false, 00:36:31.540 "zoned": false, 00:36:31.540 "supported_io_types": { 00:36:31.540 "read": true, 00:36:31.540 "write": true, 00:36:31.540 "unmap": false, 00:36:31.540 "flush": false, 00:36:31.540 "reset": false, 00:36:31.540 "nvme_admin": false, 00:36:31.540 "nvme_io": false, 00:36:31.540 "nvme_io_md": false, 00:36:31.540 "write_zeroes": true, 00:36:31.540 "zcopy": false, 00:36:31.540 "get_zone_info": false, 00:36:31.540 "zone_management": false, 00:36:31.540 "zone_append": false, 00:36:31.540 "compare": false, 00:36:31.540 "compare_and_write": false, 00:36:31.540 "abort": false, 00:36:31.540 "seek_hole": false, 00:36:31.540 "seek_data": false, 00:36:31.540 "copy": false, 00:36:31.540 "nvme_iov_md": false 00:36:31.540 }, 00:36:31.540 "driver_specific": { 00:36:31.540 "compress": { 00:36:31.540 "name": "COMP_lvs0/lv0", 00:36:31.540 "base_bdev_name": "15483139-672b-4093-b065-a2f2fd4af55e", 00:36:31.540 "pm_path": "/tmp/pmem/c12db172-7461-4f9b-bad3-dd06b8355b7b" 00:36:31.540 } 00:36:31.540 } 00:36:31.540 } 00:36:31.540 ] 00:36:31.862 16:53:28 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:36:31.862 16:53:28 compress_compdev -- compress/compress.sh@103 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:31.862 16:53:28 compress_compdev -- compress/compress.sh@104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 COMP_lvs0/lv0 00:36:32.121 16:53:28 
compress_compdev -- compress/compress.sh@105 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:32.380 [2024-07-24 16:53:29.076753] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.380 16:53:29 compress_compdev -- compress/compress.sh@109 -- # perf_pid=1841527 00:36:32.380 16:53:29 compress_compdev -- compress/compress.sh@108 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 64 -s 512 -w randrw -t 30 -c 0x18 -M 50 00:36:32.380 16:53:29 compress_compdev -- compress/compress.sh@112 -- # trap 'killprocess $perf_pid; compress_err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:32.380 16:53:29 compress_compdev -- compress/compress.sh@113 -- # wait 1841527 00:36:32.639 [2024-07-24 16:53:29.391934] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:37:04.713 Initializing NVMe Controllers 00:37:04.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:04.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:04.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:04.713 Initialization complete. Launching workers. 
00:37:04.713 ======================================================== 00:37:04.713 Latency(us) 00:37:04.713 Device Information : IOPS MiB/s Average min max 00:37:04.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 4529.77 17.69 14130.70 1772.00 30897.94 00:37:04.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 2831.23 11.06 22608.39 3374.37 42075.29 00:37:04.713 ======================================================== 00:37:04.713 Total : 7361.00 28.75 17391.44 1772.00 42075.29 00:37:04.713 00:37:04.713 16:53:59 compress_compdev -- compress/compress.sh@114 -- # destroy_vols 00:37:04.713 16:53:59 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:37:04.713 16:53:59 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:37:04.713 16:54:00 compress_compdev -- compress/compress.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:37:04.713 16:54:00 compress_compdev -- compress/compress.sh@117 -- # nvmftestfini 00:37:04.713 16:54:00 compress_compdev -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:04.713 16:54:00 compress_compdev -- nvmf/common.sh@117 -- # sync 00:37:04.713 16:54:00 compress_compdev -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:04.713 16:54:00 compress_compdev -- nvmf/common.sh@120 -- # set +e 00:37:04.713 16:54:00 compress_compdev -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:04.713 16:54:00 compress_compdev -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:04.713 rmmod nvme_tcp 00:37:04.713 rmmod nvme_fabrics 00:37:04.713 rmmod nvme_keyring 00:37:04.713 16:54:00 compress_compdev -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:04.713 16:54:00 compress_compdev -- nvmf/common.sh@124 -- # set -e 00:37:04.713 16:54:00 compress_compdev -- nvmf/common.sh@125 -- # return 0 00:37:04.713 16:54:00 
compress_compdev -- nvmf/common.sh@489 -- # '[' -n 1840063 ']' 00:37:04.713 16:54:00 compress_compdev -- nvmf/common.sh@490 -- # killprocess 1840063 00:37:04.713 16:54:00 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 1840063 ']' 00:37:04.713 16:54:00 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 1840063 00:37:04.713 16:54:00 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:37:04.713 16:54:00 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:04.713 16:54:00 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1840063 00:37:04.713 16:54:00 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:04.713 16:54:00 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:04.713 16:54:00 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1840063' 00:37:04.713 killing process with pid 1840063 00:37:04.713 16:54:00 compress_compdev -- common/autotest_common.sh@969 -- # kill 1840063 00:37:04.713 16:54:00 compress_compdev -- common/autotest_common.sh@974 -- # wait 1840063 00:37:07.255 16:54:03 compress_compdev -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:07.255 16:54:03 compress_compdev -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:07.255 16:54:03 compress_compdev -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:07.255 16:54:03 compress_compdev -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:07.255 16:54:03 compress_compdev -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:07.255 16:54:03 compress_compdev -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:07.255 16:54:03 compress_compdev -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:07.255 16:54:03 compress_compdev -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:07.255 16:54:03 compress_compdev -- nvmf/common.sh@279 -- # ip 
-4 addr flush nvmf_init_if 00:37:07.255 16:54:03 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:37:07.255 00:37:07.255 real 2m29.153s 00:37:07.255 user 6m35.901s 00:37:07.255 sys 0m22.230s 00:37:07.255 16:54:03 compress_compdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:07.255 16:54:03 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:37:07.255 ************************************ 00:37:07.255 END TEST compress_compdev 00:37:07.255 ************************************ 00:37:07.255 16:54:04 -- spdk/autotest.sh@353 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:37:07.255 16:54:04 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:07.255 16:54:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:07.255 16:54:04 -- common/autotest_common.sh@10 -- # set +x 00:37:07.255 ************************************ 00:37:07.255 START TEST compress_isal 00:37:07.255 ************************************ 00:37:07.255 16:54:04 compress_isal -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:37:07.515 * Looking for test storage... 
00:37:07.515 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:37:07.515 16:54:04 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:37:07.515 16:54:04 compress_isal -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:07.515 16:54:04 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:07.515 16:54:04 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:07.515 16:54:04 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.515 16:54:04 compress_isal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.515 16:54:04 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.515 16:54:04 compress_isal -- paths/export.sh@5 -- # export PATH 00:37:07.515 16:54:04 compress_isal -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@47 -- # : 0 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:07.515 16:54:04 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:07.515 16:54:04 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:37:07.515 16:54:04 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:37:07.515 16:54:04 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:37:07.515 16:54:04 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:37:07.515 16:54:04 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:37:07.515 16:54:04 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1847391 00:37:07.515 16:54:04 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:07.515 16:54:04 compress_isal -- 
compress/compress.sh@73 -- # waitforlisten 1847391 00:37:07.515 16:54:04 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 1847391 ']' 00:37:07.515 16:54:04 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:37:07.515 16:54:04 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:07.515 16:54:04 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:07.515 16:54:04 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:07.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:07.515 16:54:04 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:07.515 16:54:04 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:37:07.515 [2024-07-24 16:54:04.356880] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:37:07.515 [2024-07-24 16:54:04.357002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1847391 ] 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:02.3 cannot be used 
00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:07.775 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:07.775 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:07.775 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:07.775 [2024-07-24 16:54:04.570463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:08.034 [2024-07-24 16:54:04.841156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:08.034 [2024-07-24 16:54:04.841165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:08.603 16:54:05 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:08.603 16:54:05 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:37:08.603 16:54:05 compress_isal -- compress/compress.sh@74 -- # create_vols 00:37:08.603 16:54:05 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:08.603 16:54:05 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:37:11.893 16:54:08 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:37:11.893 16:54:08 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:37:11.893 16:54:08 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:11.893 16:54:08 
compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:11.893 16:54:08 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:11.893 16:54:08 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:11.893 16:54:08 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:11.893 16:54:08 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:37:12.152 [ 00:37:12.152 { 00:37:12.152 "name": "Nvme0n1", 00:37:12.152 "aliases": [ 00:37:12.152 "8960f96b-56a8-485d-8bd3-9dbf3394cc1d" 00:37:12.152 ], 00:37:12.152 "product_name": "NVMe disk", 00:37:12.152 "block_size": 512, 00:37:12.152 "num_blocks": 3907029168, 00:37:12.152 "uuid": "8960f96b-56a8-485d-8bd3-9dbf3394cc1d", 00:37:12.152 "assigned_rate_limits": { 00:37:12.152 "rw_ios_per_sec": 0, 00:37:12.152 "rw_mbytes_per_sec": 0, 00:37:12.152 "r_mbytes_per_sec": 0, 00:37:12.152 "w_mbytes_per_sec": 0 00:37:12.152 }, 00:37:12.152 "claimed": false, 00:37:12.152 "zoned": false, 00:37:12.152 "supported_io_types": { 00:37:12.152 "read": true, 00:37:12.152 "write": true, 00:37:12.152 "unmap": true, 00:37:12.152 "flush": true, 00:37:12.152 "reset": true, 00:37:12.152 "nvme_admin": true, 00:37:12.152 "nvme_io": true, 00:37:12.152 "nvme_io_md": false, 00:37:12.152 "write_zeroes": true, 00:37:12.152 "zcopy": false, 00:37:12.152 "get_zone_info": false, 00:37:12.152 "zone_management": false, 00:37:12.152 "zone_append": false, 00:37:12.152 "compare": false, 00:37:12.152 "compare_and_write": false, 00:37:12.152 "abort": true, 00:37:12.152 "seek_hole": false, 00:37:12.152 "seek_data": false, 00:37:12.152 "copy": false, 00:37:12.152 "nvme_iov_md": false 00:37:12.152 }, 00:37:12.152 "driver_specific": { 00:37:12.152 "nvme": [ 00:37:12.152 { 00:37:12.152 "pci_address": "0000:d8:00.0", 00:37:12.152 "trid": { 00:37:12.152 
"trtype": "PCIe", 00:37:12.152 "traddr": "0000:d8:00.0" 00:37:12.152 }, 00:37:12.152 "ctrlr_data": { 00:37:12.152 "cntlid": 0, 00:37:12.152 "vendor_id": "0x8086", 00:37:12.152 "model_number": "INTEL SSDPE2KX020T8", 00:37:12.152 "serial_number": "BTLJ125505KA2P0BGN", 00:37:12.152 "firmware_revision": "VDV10170", 00:37:12.152 "oacs": { 00:37:12.152 "security": 0, 00:37:12.152 "format": 1, 00:37:12.152 "firmware": 1, 00:37:12.152 "ns_manage": 1 00:37:12.152 }, 00:37:12.152 "multi_ctrlr": false, 00:37:12.152 "ana_reporting": false 00:37:12.152 }, 00:37:12.152 "vs": { 00:37:12.152 "nvme_version": "1.2" 00:37:12.152 }, 00:37:12.152 "ns_data": { 00:37:12.153 "id": 1, 00:37:12.153 "can_share": false 00:37:12.153 } 00:37:12.153 } 00:37:12.153 ], 00:37:12.153 "mp_policy": "active_passive" 00:37:12.153 } 00:37:12.153 } 00:37:12.153 ] 00:37:12.153 16:54:08 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:12.153 16:54:08 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:37:13.530 59c6c764-667b-4ff3-95a8-f4f53263a723 00:37:13.530 16:54:10 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:37:13.789 d099f646-d251-4353-a40d-3cba2f05c83d 00:37:13.789 16:54:10 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:37:13.789 16:54:10 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:37:13.789 16:54:10 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:13.789 16:54:10 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:13.789 16:54:10 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:13.789 16:54:10 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:13.789 16:54:10 compress_isal -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:13.789 16:54:10 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:37:14.048 [ 00:37:14.048 { 00:37:14.048 "name": "d099f646-d251-4353-a40d-3cba2f05c83d", 00:37:14.048 "aliases": [ 00:37:14.048 "lvs0/lv0" 00:37:14.048 ], 00:37:14.048 "product_name": "Logical Volume", 00:37:14.048 "block_size": 512, 00:37:14.048 "num_blocks": 204800, 00:37:14.048 "uuid": "d099f646-d251-4353-a40d-3cba2f05c83d", 00:37:14.048 "assigned_rate_limits": { 00:37:14.048 "rw_ios_per_sec": 0, 00:37:14.048 "rw_mbytes_per_sec": 0, 00:37:14.048 "r_mbytes_per_sec": 0, 00:37:14.048 "w_mbytes_per_sec": 0 00:37:14.048 }, 00:37:14.048 "claimed": false, 00:37:14.048 "zoned": false, 00:37:14.048 "supported_io_types": { 00:37:14.048 "read": true, 00:37:14.048 "write": true, 00:37:14.048 "unmap": true, 00:37:14.048 "flush": false, 00:37:14.048 "reset": true, 00:37:14.048 "nvme_admin": false, 00:37:14.048 "nvme_io": false, 00:37:14.048 "nvme_io_md": false, 00:37:14.048 "write_zeroes": true, 00:37:14.048 "zcopy": false, 00:37:14.048 "get_zone_info": false, 00:37:14.048 "zone_management": false, 00:37:14.048 "zone_append": false, 00:37:14.048 "compare": false, 00:37:14.049 "compare_and_write": false, 00:37:14.049 "abort": false, 00:37:14.049 "seek_hole": true, 00:37:14.049 "seek_data": true, 00:37:14.049 "copy": false, 00:37:14.049 "nvme_iov_md": false 00:37:14.049 }, 00:37:14.049 "driver_specific": { 00:37:14.049 "lvol": { 00:37:14.049 "lvol_store_uuid": "59c6c764-667b-4ff3-95a8-f4f53263a723", 00:37:14.049 "base_bdev": "Nvme0n1", 00:37:14.049 "thin_provision": true, 00:37:14.049 "num_allocated_clusters": 0, 00:37:14.049 "snapshot": false, 00:37:14.049 "clone": false, 00:37:14.049 "esnap_clone": false 00:37:14.049 } 00:37:14.049 } 00:37:14.049 } 00:37:14.049 ] 00:37:14.049 16:54:10 compress_isal -- 
common/autotest_common.sh@907 -- # return 0 00:37:14.049 16:54:10 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:37:14.049 16:54:10 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:37:14.327 [2024-07-24 16:54:11.071952] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:37:14.327 COMP_lvs0/lv0 00:37:14.327 16:54:11 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:37:14.327 16:54:11 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:37:14.327 16:54:11 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:14.327 16:54:11 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:14.327 16:54:11 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:14.327 16:54:11 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:14.327 16:54:11 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:14.607 16:54:11 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:37:14.867 [ 00:37:14.867 { 00:37:14.867 "name": "COMP_lvs0/lv0", 00:37:14.867 "aliases": [ 00:37:14.867 "ebe5640f-00a6-5fdf-92e7-424622363992" 00:37:14.867 ], 00:37:14.867 "product_name": "compress", 00:37:14.867 "block_size": 512, 00:37:14.867 "num_blocks": 200704, 00:37:14.867 "uuid": "ebe5640f-00a6-5fdf-92e7-424622363992", 00:37:14.867 "assigned_rate_limits": { 00:37:14.867 "rw_ios_per_sec": 0, 00:37:14.867 "rw_mbytes_per_sec": 0, 00:37:14.867 "r_mbytes_per_sec": 0, 00:37:14.867 "w_mbytes_per_sec": 0 00:37:14.867 }, 00:37:14.867 "claimed": false, 00:37:14.867 "zoned": false, 00:37:14.867 "supported_io_types": { 
00:37:14.867 "read": true, 00:37:14.867 "write": true, 00:37:14.867 "unmap": false, 00:37:14.867 "flush": false, 00:37:14.867 "reset": false, 00:37:14.867 "nvme_admin": false, 00:37:14.867 "nvme_io": false, 00:37:14.867 "nvme_io_md": false, 00:37:14.867 "write_zeroes": true, 00:37:14.867 "zcopy": false, 00:37:14.867 "get_zone_info": false, 00:37:14.867 "zone_management": false, 00:37:14.867 "zone_append": false, 00:37:14.867 "compare": false, 00:37:14.867 "compare_and_write": false, 00:37:14.867 "abort": false, 00:37:14.867 "seek_hole": false, 00:37:14.867 "seek_data": false, 00:37:14.867 "copy": false, 00:37:14.867 "nvme_iov_md": false 00:37:14.867 }, 00:37:14.867 "driver_specific": { 00:37:14.867 "compress": { 00:37:14.867 "name": "COMP_lvs0/lv0", 00:37:14.867 "base_bdev_name": "d099f646-d251-4353-a40d-3cba2f05c83d", 00:37:14.867 "pm_path": "/tmp/pmem/4ebb1f60-2915-410b-8688-9f2645d37a21" 00:37:14.867 } 00:37:14.867 } 00:37:14.867 } 00:37:14.867 ] 00:37:14.867 16:54:11 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:14.867 16:54:11 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:14.867 Running I/O for 3 seconds... 
00:37:18.155 00:37:18.155 Latency(us) 00:37:18.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:18.155 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:37:18.155 Verification LBA range: start 0x0 length 0x3100 00:37:18.155 COMP_lvs0/lv0 : 3.01 3180.16 12.42 0.00 0.00 10006.37 60.62 16462.64 00:37:18.155 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:37:18.155 Verification LBA range: start 0x3100 length 0x3100 00:37:18.155 COMP_lvs0/lv0 : 3.01 3166.64 12.37 0.00 0.00 10061.61 60.62 17196.65 00:37:18.155 =================================================================================================================== 00:37:18.155 Total : 6346.81 24.79 0.00 0.00 10033.93 60.62 17196.65 00:37:18.155 0 00:37:18.155 16:54:14 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:37:18.155 16:54:14 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:37:18.155 16:54:14 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:37:18.414 16:54:15 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:37:18.414 16:54:15 compress_isal -- compress/compress.sh@78 -- # killprocess 1847391 00:37:18.414 16:54:15 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 1847391 ']' 00:37:18.414 16:54:15 compress_isal -- common/autotest_common.sh@954 -- # kill -0 1847391 00:37:18.414 16:54:15 compress_isal -- common/autotest_common.sh@955 -- # uname 00:37:18.414 16:54:15 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:18.414 16:54:15 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1847391 00:37:18.673 16:54:15 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:18.673 16:54:15 
compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:18.673 16:54:15 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1847391' 00:37:18.673 killing process with pid 1847391 00:37:18.673 16:54:15 compress_isal -- common/autotest_common.sh@969 -- # kill 1847391 00:37:18.673 Received shutdown signal, test time was about 3.000000 seconds 00:37:18.673 00:37:18.673 Latency(us) 00:37:18.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:18.673 =================================================================================================================== 00:37:18.673 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:18.673 16:54:15 compress_isal -- common/autotest_common.sh@974 -- # wait 1847391 00:37:22.869 16:54:19 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:37:22.869 16:54:19 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:37:22.869 16:54:19 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1850205 00:37:22.869 16:54:19 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:22.869 16:54:19 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:37:22.869 16:54:19 compress_isal -- compress/compress.sh@73 -- # waitforlisten 1850205 00:37:22.869 16:54:19 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 1850205 ']' 00:37:22.869 16:54:19 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:22.869 16:54:19 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:22.869 16:54:19 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:22.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:22.869 16:54:19 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:22.869 16:54:19 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:37:22.869 [2024-07-24 16:54:19.420092] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:37:22.869 [2024-07-24 16:54:19.420229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850205 ] 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:22.869 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:02.3 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.869 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:22.869 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:37:22.869 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:22.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.870 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:22.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.870 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:22.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.870 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:22.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.870 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:22.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.870 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:22.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.870 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:22.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.870 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:22.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:22.870 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:22.870 [2024-07-24 16:54:19.633983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:23.130 [2024-07-24 16:54:19.912692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:23.130 [2024-07-24 16:54:19.912694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:23.698 16:54:20 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:23.698 16:54:20 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:37:23.698 16:54:20 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:37:23.698 16:54:20 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:23.698 16:54:20 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:37:26.986 16:54:23 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:37:26.986 16:54:23 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:37:26.986 16:54:23 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:26.986 16:54:23 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:26.986 16:54:23 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:26.986 16:54:23 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:26.986 16:54:23 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:26.986 16:54:23 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:37:27.246 [ 00:37:27.246 { 00:37:27.246 "name": "Nvme0n1", 00:37:27.246 "aliases": [ 00:37:27.246 "8a6f0ecf-3ecb-45e2-aca9-db8a6e5faca3" 00:37:27.246 ], 00:37:27.246 "product_name": "NVMe disk", 00:37:27.246 "block_size": 512, 00:37:27.246 "num_blocks": 3907029168, 00:37:27.246 "uuid": "8a6f0ecf-3ecb-45e2-aca9-db8a6e5faca3", 00:37:27.246 "assigned_rate_limits": { 00:37:27.246 "rw_ios_per_sec": 0, 00:37:27.246 "rw_mbytes_per_sec": 0, 00:37:27.246 "r_mbytes_per_sec": 0, 00:37:27.246 "w_mbytes_per_sec": 0 00:37:27.246 }, 00:37:27.246 "claimed": false, 00:37:27.246 "zoned": false, 00:37:27.246 "supported_io_types": { 00:37:27.246 "read": true, 00:37:27.246 "write": true, 00:37:27.246 "unmap": true, 00:37:27.246 "flush": true, 00:37:27.246 "reset": true, 00:37:27.246 "nvme_admin": true, 00:37:27.246 "nvme_io": true, 00:37:27.246 "nvme_io_md": false, 00:37:27.246 "write_zeroes": true, 00:37:27.246 "zcopy": false, 00:37:27.246 "get_zone_info": false, 00:37:27.246 "zone_management": false, 00:37:27.246 "zone_append": false, 
00:37:27.246 "compare": false, 00:37:27.246 "compare_and_write": false, 00:37:27.246 "abort": true, 00:37:27.246 "seek_hole": false, 00:37:27.246 "seek_data": false, 00:37:27.246 "copy": false, 00:37:27.246 "nvme_iov_md": false 00:37:27.246 }, 00:37:27.246 "driver_specific": { 00:37:27.246 "nvme": [ 00:37:27.246 { 00:37:27.246 "pci_address": "0000:d8:00.0", 00:37:27.246 "trid": { 00:37:27.246 "trtype": "PCIe", 00:37:27.246 "traddr": "0000:d8:00.0" 00:37:27.246 }, 00:37:27.246 "ctrlr_data": { 00:37:27.246 "cntlid": 0, 00:37:27.246 "vendor_id": "0x8086", 00:37:27.246 "model_number": "INTEL SSDPE2KX020T8", 00:37:27.246 "serial_number": "BTLJ125505KA2P0BGN", 00:37:27.246 "firmware_revision": "VDV10170", 00:37:27.246 "oacs": { 00:37:27.246 "security": 0, 00:37:27.246 "format": 1, 00:37:27.246 "firmware": 1, 00:37:27.246 "ns_manage": 1 00:37:27.246 }, 00:37:27.246 "multi_ctrlr": false, 00:37:27.246 "ana_reporting": false 00:37:27.246 }, 00:37:27.246 "vs": { 00:37:27.246 "nvme_version": "1.2" 00:37:27.246 }, 00:37:27.246 "ns_data": { 00:37:27.246 "id": 1, 00:37:27.246 "can_share": false 00:37:27.246 } 00:37:27.246 } 00:37:27.246 ], 00:37:27.246 "mp_policy": "active_passive" 00:37:27.246 } 00:37:27.246 } 00:37:27.246 ] 00:37:27.246 16:54:24 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:27.246 16:54:24 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:37:28.625 2e60ca8c-73cc-42aa-ab1f-c239f3ed0592 00:37:28.625 16:54:25 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:37:28.884 0145749e-f718-4ef8-b717-d0606b47b658 00:37:28.884 16:54:25 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:37:28.884 16:54:25 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:37:28.884 16:54:25 compress_isal -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:28.884 16:54:25 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:28.884 16:54:25 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:28.884 16:54:25 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:28.884 16:54:25 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:29.143 16:54:25 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:37:29.144 [ 00:37:29.144 { 00:37:29.144 "name": "0145749e-f718-4ef8-b717-d0606b47b658", 00:37:29.144 "aliases": [ 00:37:29.144 "lvs0/lv0" 00:37:29.144 ], 00:37:29.144 "product_name": "Logical Volume", 00:37:29.144 "block_size": 512, 00:37:29.144 "num_blocks": 204800, 00:37:29.144 "uuid": "0145749e-f718-4ef8-b717-d0606b47b658", 00:37:29.144 "assigned_rate_limits": { 00:37:29.144 "rw_ios_per_sec": 0, 00:37:29.144 "rw_mbytes_per_sec": 0, 00:37:29.144 "r_mbytes_per_sec": 0, 00:37:29.144 "w_mbytes_per_sec": 0 00:37:29.144 }, 00:37:29.144 "claimed": false, 00:37:29.144 "zoned": false, 00:37:29.144 "supported_io_types": { 00:37:29.144 "read": true, 00:37:29.144 "write": true, 00:37:29.144 "unmap": true, 00:37:29.144 "flush": false, 00:37:29.144 "reset": true, 00:37:29.144 "nvme_admin": false, 00:37:29.144 "nvme_io": false, 00:37:29.144 "nvme_io_md": false, 00:37:29.144 "write_zeroes": true, 00:37:29.144 "zcopy": false, 00:37:29.144 "get_zone_info": false, 00:37:29.144 "zone_management": false, 00:37:29.144 "zone_append": false, 00:37:29.144 "compare": false, 00:37:29.144 "compare_and_write": false, 00:37:29.144 "abort": false, 00:37:29.144 "seek_hole": true, 00:37:29.144 "seek_data": true, 00:37:29.144 "copy": false, 00:37:29.144 "nvme_iov_md": false 00:37:29.144 }, 00:37:29.144 "driver_specific": { 00:37:29.144 "lvol": { 00:37:29.144 
"lvol_store_uuid": "2e60ca8c-73cc-42aa-ab1f-c239f3ed0592", 00:37:29.144 "base_bdev": "Nvme0n1", 00:37:29.144 "thin_provision": true, 00:37:29.144 "num_allocated_clusters": 0, 00:37:29.144 "snapshot": false, 00:37:29.144 "clone": false, 00:37:29.144 "esnap_clone": false 00:37:29.144 } 00:37:29.144 } 00:37:29.144 } 00:37:29.144 ] 00:37:29.144 16:54:25 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:29.144 16:54:25 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:37:29.144 16:54:25 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:37:29.403 [2024-07-24 16:54:26.208977] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:37:29.403 COMP_lvs0/lv0 00:37:29.403 16:54:26 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:37:29.403 16:54:26 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:37:29.403 16:54:26 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:29.403 16:54:26 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:29.403 16:54:26 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:29.403 16:54:26 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:29.403 16:54:26 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:29.662 16:54:26 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:37:29.922 [ 00:37:29.922 { 00:37:29.922 "name": "COMP_lvs0/lv0", 00:37:29.922 "aliases": [ 00:37:29.922 "cbcfea89-f2da-5989-a3c4-5e15553fa102" 00:37:29.922 ], 00:37:29.922 "product_name": "compress", 00:37:29.922 "block_size": 512, 00:37:29.922 
"num_blocks": 200704, 00:37:29.922 "uuid": "cbcfea89-f2da-5989-a3c4-5e15553fa102", 00:37:29.922 "assigned_rate_limits": { 00:37:29.922 "rw_ios_per_sec": 0, 00:37:29.922 "rw_mbytes_per_sec": 0, 00:37:29.922 "r_mbytes_per_sec": 0, 00:37:29.922 "w_mbytes_per_sec": 0 00:37:29.922 }, 00:37:29.922 "claimed": false, 00:37:29.922 "zoned": false, 00:37:29.922 "supported_io_types": { 00:37:29.922 "read": true, 00:37:29.922 "write": true, 00:37:29.922 "unmap": false, 00:37:29.922 "flush": false, 00:37:29.922 "reset": false, 00:37:29.922 "nvme_admin": false, 00:37:29.922 "nvme_io": false, 00:37:29.922 "nvme_io_md": false, 00:37:29.922 "write_zeroes": true, 00:37:29.922 "zcopy": false, 00:37:29.922 "get_zone_info": false, 00:37:29.922 "zone_management": false, 00:37:29.922 "zone_append": false, 00:37:29.922 "compare": false, 00:37:29.922 "compare_and_write": false, 00:37:29.922 "abort": false, 00:37:29.922 "seek_hole": false, 00:37:29.922 "seek_data": false, 00:37:29.922 "copy": false, 00:37:29.922 "nvme_iov_md": false 00:37:29.922 }, 00:37:29.922 "driver_specific": { 00:37:29.922 "compress": { 00:37:29.922 "name": "COMP_lvs0/lv0", 00:37:29.922 "base_bdev_name": "0145749e-f718-4ef8-b717-d0606b47b658", 00:37:29.922 "pm_path": "/tmp/pmem/2bdce874-b0a1-49f5-85e3-e9caa03d9cc7" 00:37:29.922 } 00:37:29.922 } 00:37:29.922 } 00:37:29.922 ] 00:37:29.922 16:54:26 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:29.922 16:54:26 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:30.181 Running I/O for 3 seconds... 
00:37:33.468 00:37:33.468 Latency(us) 00:37:33.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.468 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:37:33.468 Verification LBA range: start 0x0 length 0x3100 00:37:33.468 COMP_lvs0/lv0 : 3.00 3244.44 12.67 0.00 0.00 9809.10 62.26 15204.35 00:37:33.468 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:37:33.468 Verification LBA range: start 0x3100 length 0x3100 00:37:33.468 COMP_lvs0/lv0 : 3.01 3277.46 12.80 0.00 0.00 9710.63 61.44 15414.07 00:37:33.468 =================================================================================================================== 00:37:33.468 Total : 6521.90 25.48 0.00 0.00 9759.59 61.44 15414.07 00:37:33.468 0 00:37:33.468 16:54:29 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:37:33.469 16:54:29 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:37:33.469 16:54:30 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:37:33.727 16:54:30 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:37:33.727 16:54:30 compress_isal -- compress/compress.sh@78 -- # killprocess 1850205 00:37:33.727 16:54:30 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 1850205 ']' 00:37:33.727 16:54:30 compress_isal -- common/autotest_common.sh@954 -- # kill -0 1850205 00:37:33.727 16:54:30 compress_isal -- common/autotest_common.sh@955 -- # uname 00:37:33.727 16:54:30 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:33.727 16:54:30 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1850205 00:37:33.727 16:54:30 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:33.727 16:54:30 compress_isal 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:33.727 16:54:30 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1850205' 00:37:33.727 killing process with pid 1850205 00:37:33.727 16:54:30 compress_isal -- common/autotest_common.sh@969 -- # kill 1850205 00:37:33.727 Received shutdown signal, test time was about 3.000000 seconds 00:37:33.727 00:37:33.727 Latency(us) 00:37:33.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.727 =================================================================================================================== 00:37:33.727 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:33.727 16:54:30 compress_isal -- common/autotest_common.sh@974 -- # wait 1850205 00:37:37.946 16:54:34 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:37:37.946 16:54:34 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:37:37.946 16:54:34 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1852604 00:37:37.946 16:54:34 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:37.946 16:54:34 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:37:37.946 16:54:34 compress_isal -- compress/compress.sh@73 -- # waitforlisten 1852604 00:37:37.946 16:54:34 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 1852604 ']' 00:37:37.946 16:54:34 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:37.946 16:54:34 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:37.946 16:54:34 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:37.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:37.946 16:54:34 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:37.946 16:54:34 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:37:37.946 [2024-07-24 16:54:34.441428] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... [2024-07-24 16:54:34.441555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852604 ] 00:37:37.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:37.946 EAL: Requested device 0000:3d:01.0 cannot be used
[identical "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device … cannot be used" message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7]
00:37:37.947 [2024-07-24 16:54:34.655816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:38.206 [2024-07-24 16:54:34.934492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:38.206 [2024-07-24 16:54:34.934495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:38.775 16:54:35 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:38.775 16:54:35 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:37:38.775 16:54:35 compress_isal -- compress/compress.sh@74 -- # create_vols 4096 00:37:38.775 16:54:35 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:38.775 16:54:35 compress_isal -- compress/compress.sh@34 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:37:42.062 16:54:38 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:37:42.062 16:54:38 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:37:42.062 16:54:38 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:42.062 16:54:38 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:42.062 16:54:38 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:42.062 16:54:38 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:42.062 16:54:38 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:42.062 16:54:38 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:37:42.322 [ 00:37:42.322 { 00:37:42.322 "name": "Nvme0n1", 00:37:42.322 "aliases": [ 00:37:42.322 "23ce4bd9-90e8-43b3-a18f-908a11d34507" 00:37:42.322 ], 00:37:42.322 "product_name": "NVMe disk", 00:37:42.322 "block_size": 512, 00:37:42.322 "num_blocks": 3907029168, 00:37:42.322 "uuid": "23ce4bd9-90e8-43b3-a18f-908a11d34507", 00:37:42.322 "assigned_rate_limits": { 00:37:42.322 "rw_ios_per_sec": 0, 00:37:42.322 "rw_mbytes_per_sec": 0, 00:37:42.322 "r_mbytes_per_sec": 0, 00:37:42.322 "w_mbytes_per_sec": 0 00:37:42.322 }, 00:37:42.322 "claimed": false, 00:37:42.322 "zoned": false, 00:37:42.322 "supported_io_types": { 00:37:42.322 "read": true, 00:37:42.322 "write": true, 00:37:42.322 "unmap": true, 00:37:42.322 "flush": true, 00:37:42.322 "reset": true, 00:37:42.322 "nvme_admin": true, 00:37:42.322 "nvme_io": true, 00:37:42.322 "nvme_io_md": false, 00:37:42.322 "write_zeroes": true, 00:37:42.322 "zcopy": false, 00:37:42.322 "get_zone_info": false, 00:37:42.322 "zone_management": false, 00:37:42.322 "zone_append": false, 
00:37:42.322 "compare": false, 00:37:42.322 "compare_and_write": false, 00:37:42.322 "abort": true, 00:37:42.322 "seek_hole": false, 00:37:42.322 "seek_data": false, 00:37:42.322 "copy": false, 00:37:42.322 "nvme_iov_md": false 00:37:42.322 }, 00:37:42.322 "driver_specific": { 00:37:42.322 "nvme": [ 00:37:42.322 { 00:37:42.322 "pci_address": "0000:d8:00.0", 00:37:42.322 "trid": { 00:37:42.322 "trtype": "PCIe", 00:37:42.322 "traddr": "0000:d8:00.0" 00:37:42.322 }, 00:37:42.322 "ctrlr_data": { 00:37:42.322 "cntlid": 0, 00:37:42.322 "vendor_id": "0x8086", 00:37:42.322 "model_number": "INTEL SSDPE2KX020T8", 00:37:42.322 "serial_number": "BTLJ125505KA2P0BGN", 00:37:42.322 "firmware_revision": "VDV10170", 00:37:42.322 "oacs": { 00:37:42.322 "security": 0, 00:37:42.322 "format": 1, 00:37:42.322 "firmware": 1, 00:37:42.322 "ns_manage": 1 00:37:42.322 }, 00:37:42.322 "multi_ctrlr": false, 00:37:42.322 "ana_reporting": false 00:37:42.322 }, 00:37:42.322 "vs": { 00:37:42.322 "nvme_version": "1.2" 00:37:42.322 }, 00:37:42.322 "ns_data": { 00:37:42.322 "id": 1, 00:37:42.322 "can_share": false 00:37:42.322 } 00:37:42.322 } 00:37:42.322 ], 00:37:42.322 "mp_policy": "active_passive" 00:37:42.322 } 00:37:42.322 } 00:37:42.322 ] 00:37:42.322 16:54:39 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:42.322 16:54:39 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:37:43.700 7ebd4bea-555c-4463-a7d7-28c2642246b1 00:37:43.700 16:54:40 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:37:43.700 3694357c-2e5f-4c30-925c-f94b054fed79 00:37:43.700 16:54:40 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:37:43.700 16:54:40 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:37:43.700 16:54:40 compress_isal -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:43.700 16:54:40 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:43.700 16:54:40 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:43.700 16:54:40 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:43.700 16:54:40 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:43.960 16:54:40 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:37:44.219 [ 00:37:44.219 { 00:37:44.219 "name": "3694357c-2e5f-4c30-925c-f94b054fed79", 00:37:44.219 "aliases": [ 00:37:44.219 "lvs0/lv0" 00:37:44.219 ], 00:37:44.219 "product_name": "Logical Volume", 00:37:44.219 "block_size": 512, 00:37:44.219 "num_blocks": 204800, 00:37:44.219 "uuid": "3694357c-2e5f-4c30-925c-f94b054fed79", 00:37:44.219 "assigned_rate_limits": { 00:37:44.219 "rw_ios_per_sec": 0, 00:37:44.219 "rw_mbytes_per_sec": 0, 00:37:44.219 "r_mbytes_per_sec": 0, 00:37:44.219 "w_mbytes_per_sec": 0 00:37:44.219 }, 00:37:44.219 "claimed": false, 00:37:44.219 "zoned": false, 00:37:44.219 "supported_io_types": { 00:37:44.219 "read": true, 00:37:44.219 "write": true, 00:37:44.219 "unmap": true, 00:37:44.219 "flush": false, 00:37:44.219 "reset": true, 00:37:44.219 "nvme_admin": false, 00:37:44.219 "nvme_io": false, 00:37:44.219 "nvme_io_md": false, 00:37:44.219 "write_zeroes": true, 00:37:44.219 "zcopy": false, 00:37:44.219 "get_zone_info": false, 00:37:44.219 "zone_management": false, 00:37:44.219 "zone_append": false, 00:37:44.219 "compare": false, 00:37:44.219 "compare_and_write": false, 00:37:44.219 "abort": false, 00:37:44.219 "seek_hole": true, 00:37:44.219 "seek_data": true, 00:37:44.219 "copy": false, 00:37:44.219 "nvme_iov_md": false 00:37:44.219 }, 00:37:44.219 "driver_specific": { 00:37:44.219 "lvol": { 00:37:44.219 
"lvol_store_uuid": "7ebd4bea-555c-4463-a7d7-28c2642246b1", 00:37:44.219 "base_bdev": "Nvme0n1", 00:37:44.219 "thin_provision": true, 00:37:44.219 "num_allocated_clusters": 0, 00:37:44.219 "snapshot": false, 00:37:44.219 "clone": false, 00:37:44.219 "esnap_clone": false 00:37:44.219 } 00:37:44.219 } 00:37:44.219 } 00:37:44.219 ] 00:37:44.219 16:54:41 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:44.219 16:54:41 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:37:44.219 16:54:41 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:37:44.478 [2024-07-24 16:54:41.251369] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:37:44.478 COMP_lvs0/lv0 00:37:44.478 16:54:41 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:37:44.478 16:54:41 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:37:44.478 16:54:41 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:44.478 16:54:41 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:44.478 16:54:41 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:44.478 16:54:41 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:44.478 16:54:41 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:44.737 16:54:41 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:37:44.997 [ 00:37:44.997 { 00:37:44.997 "name": "COMP_lvs0/lv0", 00:37:44.997 "aliases": [ 00:37:44.997 "35fa352a-be68-5821-b37d-ad9fb8137dbc" 00:37:44.997 ], 00:37:44.997 "product_name": "compress", 00:37:44.997 "block_size": 4096, 00:37:44.997 
"num_blocks": 25088, 00:37:44.997 "uuid": "35fa352a-be68-5821-b37d-ad9fb8137dbc", 00:37:44.997 "assigned_rate_limits": { 00:37:44.997 "rw_ios_per_sec": 0, 00:37:44.997 "rw_mbytes_per_sec": 0, 00:37:44.997 "r_mbytes_per_sec": 0, 00:37:44.997 "w_mbytes_per_sec": 0 00:37:44.997 }, 00:37:44.997 "claimed": false, 00:37:44.997 "zoned": false, 00:37:44.997 "supported_io_types": { 00:37:44.997 "read": true, 00:37:44.997 "write": true, 00:37:44.997 "unmap": false, 00:37:44.997 "flush": false, 00:37:44.997 "reset": false, 00:37:44.997 "nvme_admin": false, 00:37:44.997 "nvme_io": false, 00:37:44.997 "nvme_io_md": false, 00:37:44.997 "write_zeroes": true, 00:37:44.997 "zcopy": false, 00:37:44.997 "get_zone_info": false, 00:37:44.997 "zone_management": false, 00:37:44.997 "zone_append": false, 00:37:44.997 "compare": false, 00:37:44.997 "compare_and_write": false, 00:37:44.997 "abort": false, 00:37:44.997 "seek_hole": false, 00:37:44.997 "seek_data": false, 00:37:44.997 "copy": false, 00:37:44.997 "nvme_iov_md": false 00:37:44.997 }, 00:37:44.997 "driver_specific": { 00:37:44.997 "compress": { 00:37:44.997 "name": "COMP_lvs0/lv0", 00:37:44.997 "base_bdev_name": "3694357c-2e5f-4c30-925c-f94b054fed79", 00:37:44.997 "pm_path": "/tmp/pmem/e3012440-11be-4241-b75b-7addca897070" 00:37:44.997 } 00:37:44.997 } 00:37:44.997 } 00:37:44.997 ] 00:37:44.997 16:54:41 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:44.997 16:54:41 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:44.997 Running I/O for 3 seconds... 
00:37:48.284 00:37:48.284 Latency(us) 00:37:48.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.284 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:37:48.284 Verification LBA range: start 0x0 length 0x3100 00:37:48.284 COMP_lvs0/lv0 : 3.01 3210.14 12.54 0.00 0.00 9903.56 62.67 17196.65 00:37:48.285 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:37:48.285 Verification LBA range: start 0x3100 length 0x3100 00:37:48.285 COMP_lvs0/lv0 : 3.01 3215.88 12.56 0.00 0.00 9895.26 61.85 17825.79 00:37:48.285 =================================================================================================================== 00:37:48.285 Total : 6426.02 25.10 0.00 0.00 9899.41 61.85 17825.79 00:37:48.285 0 00:37:48.285 16:54:44 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:37:48.285 16:54:44 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:37:48.543 16:54:45 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:37:48.543 16:54:45 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:37:48.543 16:54:45 compress_isal -- compress/compress.sh@78 -- # killprocess 1852604 00:37:48.543 16:54:45 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 1852604 ']' 00:37:48.543 16:54:45 compress_isal -- common/autotest_common.sh@954 -- # kill -0 1852604 00:37:48.543 16:54:45 compress_isal -- common/autotest_common.sh@955 -- # uname 00:37:48.803 16:54:45 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:48.803 16:54:45 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1852604 00:37:48.803 16:54:45 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:48.803 16:54:45 compress_isal 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:48.803 16:54:45 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1852604' 00:37:48.803 killing process with pid 1852604 00:37:48.803 16:54:45 compress_isal -- common/autotest_common.sh@969 -- # kill 1852604 00:37:48.803 Received shutdown signal, test time was about 3.000000 seconds 00:37:48.803 00:37:48.803 Latency(us) 00:37:48.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.803 =================================================================================================================== 00:37:48.803 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:48.803 16:54:45 compress_isal -- common/autotest_common.sh@974 -- # wait 1852604 00:37:52.993 16:54:49 compress_isal -- compress/compress.sh@89 -- # run_bdevio 00:37:52.993 16:54:49 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:37:52.993 16:54:49 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=1855002 00:37:52.993 16:54:49 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:52.993 16:54:49 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w 00:37:52.993 16:54:49 compress_isal -- compress/compress.sh@57 -- # waitforlisten 1855002 00:37:52.993 16:54:49 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 1855002 ']' 00:37:52.993 16:54:49 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:52.993 16:54:49 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:52.993 16:54:49 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:52.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:52.993 16:54:49 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:52.993 16:54:49 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:37:52.993 [2024-07-24 16:54:49.474879] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:37:52.993 [2024-07-24 16:54:49.474970] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1855002 ] 00:37:52.993 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:52.993 EAL: Requested device 0000:3d:01.0 cannot be used
[... identical qat_pci_device_allocate() / EAL "cannot be used" pairs repeated for the remaining QAT VFs 0000:3d:01.1 through 0000:3f:02.7 ...]
00:37:52.993 [2024-07-24 16:54:49.676569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:53.252 [2024-07-24 16:54:49.954083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:53.252 [2024-07-24 16:54:49.954098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.252 [2024-07-24 16:54:49.954103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:53.828 16:54:50 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:53.828 16:54:50 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:37:53.828 16:54:50 compress_isal -- compress/compress.sh@58 -- # create_vols 00:37:53.828 16:54:50 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:53.828 16:54:50 compress_isal -- compress/compress.sh@34 -- 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:37:57.153 16:54:53 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:37:57.153 16:54:53 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:37:57.153 16:54:53 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:57.153 16:54:53 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:57.153 16:54:53 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:57.153 16:54:53 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:57.153 16:54:53 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:57.153 16:54:53 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:37:57.413 [ 00:37:57.413 { 00:37:57.413 "name": "Nvme0n1", 00:37:57.413 "aliases": [ 00:37:57.413 "8b23a22f-8c16-4f2c-b797-a9b7ac0ad2e1" 00:37:57.413 ], 00:37:57.413 "product_name": "NVMe disk", 00:37:57.413 "block_size": 512, 00:37:57.413 "num_blocks": 3907029168, 00:37:57.413 "uuid": "8b23a22f-8c16-4f2c-b797-a9b7ac0ad2e1", 00:37:57.413 "assigned_rate_limits": { 00:37:57.413 "rw_ios_per_sec": 0, 00:37:57.413 "rw_mbytes_per_sec": 0, 00:37:57.413 "r_mbytes_per_sec": 0, 00:37:57.413 "w_mbytes_per_sec": 0 00:37:57.413 }, 00:37:57.413 "claimed": false, 00:37:57.413 "zoned": false, 00:37:57.413 "supported_io_types": { 00:37:57.413 "read": true, 00:37:57.413 "write": true, 00:37:57.413 "unmap": true, 00:37:57.413 "flush": true, 00:37:57.413 "reset": true, 00:37:57.413 "nvme_admin": true, 00:37:57.413 "nvme_io": true, 00:37:57.413 "nvme_io_md": false, 00:37:57.413 "write_zeroes": true, 00:37:57.413 "zcopy": false, 00:37:57.413 "get_zone_info": false, 00:37:57.413 "zone_management": false, 00:37:57.413 "zone_append": false, 
00:37:57.413 "compare": false, 00:37:57.413 "compare_and_write": false, 00:37:57.413 "abort": true, 00:37:57.413 "seek_hole": false, 00:37:57.413 "seek_data": false, 00:37:57.413 "copy": false, 00:37:57.413 "nvme_iov_md": false 00:37:57.413 }, 00:37:57.413 "driver_specific": { 00:37:57.413 "nvme": [ 00:37:57.413 { 00:37:57.413 "pci_address": "0000:d8:00.0", 00:37:57.413 "trid": { 00:37:57.413 "trtype": "PCIe", 00:37:57.413 "traddr": "0000:d8:00.0" 00:37:57.413 }, 00:37:57.413 "ctrlr_data": { 00:37:57.413 "cntlid": 0, 00:37:57.413 "vendor_id": "0x8086", 00:37:57.413 "model_number": "INTEL SSDPE2KX020T8", 00:37:57.413 "serial_number": "BTLJ125505KA2P0BGN", 00:37:57.413 "firmware_revision": "VDV10170", 00:37:57.413 "oacs": { 00:37:57.413 "security": 0, 00:37:57.413 "format": 1, 00:37:57.413 "firmware": 1, 00:37:57.413 "ns_manage": 1 00:37:57.413 }, 00:37:57.413 "multi_ctrlr": false, 00:37:57.413 "ana_reporting": false 00:37:57.413 }, 00:37:57.413 "vs": { 00:37:57.413 "nvme_version": "1.2" 00:37:57.413 }, 00:37:57.413 "ns_data": { 00:37:57.413 "id": 1, 00:37:57.413 "can_share": false 00:37:57.413 } 00:37:57.413 } 00:37:57.413 ], 00:37:57.413 "mp_policy": "active_passive" 00:37:57.413 } 00:37:57.413 } 00:37:57.413 ] 00:37:57.413 16:54:54 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:57.413 16:54:54 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:37:58.790 529d4ed5-a52f-4f5e-a4c8-f612560d8468 00:37:58.790 16:54:55 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:37:58.790 39e745d2-9852-4035-9e06-98410797afad 00:37:58.790 16:54:55 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:37:58.790 16:54:55 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:37:58.790 16:54:55 compress_isal -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:58.790 16:54:55 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:58.790 16:54:55 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:58.790 16:54:55 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:58.790 16:54:55 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:59.049 16:54:55 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:37:59.049 [ 00:37:59.049 { 00:37:59.049 "name": "39e745d2-9852-4035-9e06-98410797afad", 00:37:59.049 "aliases": [ 00:37:59.049 "lvs0/lv0" 00:37:59.049 ], 00:37:59.049 "product_name": "Logical Volume", 00:37:59.049 "block_size": 512, 00:37:59.049 "num_blocks": 204800, 00:37:59.049 "uuid": "39e745d2-9852-4035-9e06-98410797afad", 00:37:59.049 "assigned_rate_limits": { 00:37:59.049 "rw_ios_per_sec": 0, 00:37:59.049 "rw_mbytes_per_sec": 0, 00:37:59.049 "r_mbytes_per_sec": 0, 00:37:59.049 "w_mbytes_per_sec": 0 00:37:59.049 }, 00:37:59.049 "claimed": false, 00:37:59.049 "zoned": false, 00:37:59.049 "supported_io_types": { 00:37:59.049 "read": true, 00:37:59.049 "write": true, 00:37:59.049 "unmap": true, 00:37:59.049 "flush": false, 00:37:59.049 "reset": true, 00:37:59.049 "nvme_admin": false, 00:37:59.049 "nvme_io": false, 00:37:59.049 "nvme_io_md": false, 00:37:59.049 "write_zeroes": true, 00:37:59.049 "zcopy": false, 00:37:59.049 "get_zone_info": false, 00:37:59.049 "zone_management": false, 00:37:59.049 "zone_append": false, 00:37:59.049 "compare": false, 00:37:59.049 "compare_and_write": false, 00:37:59.049 "abort": false, 00:37:59.049 "seek_hole": true, 00:37:59.049 "seek_data": true, 00:37:59.049 "copy": false, 00:37:59.049 "nvme_iov_md": false 00:37:59.049 }, 00:37:59.049 "driver_specific": { 00:37:59.049 "lvol": { 00:37:59.049 
"lvol_store_uuid": "529d4ed5-a52f-4f5e-a4c8-f612560d8468", 00:37:59.049 "base_bdev": "Nvme0n1", 00:37:59.049 "thin_provision": true, 00:37:59.049 "num_allocated_clusters": 0, 00:37:59.049 "snapshot": false, 00:37:59.049 "clone": false, 00:37:59.049 "esnap_clone": false 00:37:59.049 } 00:37:59.049 } 00:37:59.049 } 00:37:59.049 ] 00:37:59.049 16:54:55 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:59.049 16:54:55 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:37:59.049 16:54:55 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:37:59.309 [2024-07-24 16:54:55.995607] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:37:59.309 COMP_lvs0/lv0 00:37:59.309 16:54:56 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:37:59.309 16:54:56 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:37:59.309 16:54:56 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:59.309 16:54:56 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:59.309 16:54:56 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:59.309 16:54:56 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:59.309 16:54:56 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:59.568 16:54:56 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:37:59.568 [ 00:37:59.568 { 00:37:59.568 "name": "COMP_lvs0/lv0", 00:37:59.568 "aliases": [ 00:37:59.568 "9dd04870-94dd-5153-a2f5-434e0ea21272" 00:37:59.568 ], 00:37:59.568 "product_name": "compress", 00:37:59.568 "block_size": 512, 00:37:59.568 
"num_blocks": 200704, 00:37:59.568 "uuid": "9dd04870-94dd-5153-a2f5-434e0ea21272", 00:37:59.568 "assigned_rate_limits": { 00:37:59.568 "rw_ios_per_sec": 0, 00:37:59.568 "rw_mbytes_per_sec": 0, 00:37:59.568 "r_mbytes_per_sec": 0, 00:37:59.568 "w_mbytes_per_sec": 0 00:37:59.568 }, 00:37:59.568 "claimed": false, 00:37:59.568 "zoned": false, 00:37:59.568 "supported_io_types": { 00:37:59.568 "read": true, 00:37:59.568 "write": true, 00:37:59.568 "unmap": false, 00:37:59.568 "flush": false, 00:37:59.568 "reset": false, 00:37:59.568 "nvme_admin": false, 00:37:59.568 "nvme_io": false, 00:37:59.568 "nvme_io_md": false, 00:37:59.568 "write_zeroes": true, 00:37:59.568 "zcopy": false, 00:37:59.568 "get_zone_info": false, 00:37:59.568 "zone_management": false, 00:37:59.568 "zone_append": false, 00:37:59.568 "compare": false, 00:37:59.568 "compare_and_write": false, 00:37:59.568 "abort": false, 00:37:59.568 "seek_hole": false, 00:37:59.568 "seek_data": false, 00:37:59.568 "copy": false, 00:37:59.568 "nvme_iov_md": false 00:37:59.568 }, 00:37:59.568 "driver_specific": { 00:37:59.568 "compress": { 00:37:59.568 "name": "COMP_lvs0/lv0", 00:37:59.568 "base_bdev_name": "39e745d2-9852-4035-9e06-98410797afad", 00:37:59.568 "pm_path": "/tmp/pmem/2fbfe17c-a5e9-4086-aceb-befc4b53b7f9" 00:37:59.568 } 00:37:59.568 } 00:37:59.568 } 00:37:59.568 ] 00:37:59.568 16:54:56 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:59.568 16:54:56 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:37:59.828 I/O targets: 00:37:59.828 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:37:59.828 00:37:59.828 00:37:59.828 CUnit - A unit testing framework for C - Version 2.1-3 00:37:59.828 http://cunit.sourceforge.net/ 00:37:59.828 00:37:59.828 00:37:59.828 Suite: bdevio tests on: COMP_lvs0/lv0 00:37:59.828 Test: blockdev write read block ...passed 00:37:59.828 Test: blockdev write zeroes read block 
...passed 00:37:59.828 Test: blockdev write zeroes read no split ...passed 00:37:59.828 Test: blockdev write zeroes read split ...passed 00:37:59.828 Test: blockdev write zeroes read split partial ...passed 00:37:59.828 Test: blockdev reset ...[2024-07-24 16:54:56.618343] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:37:59.828 passed 00:37:59.828 Test: blockdev write read 8 blocks ...passed 00:37:59.828 Test: blockdev write read size > 128k ...passed 00:37:59.828 Test: blockdev write read invalid size ...passed 00:37:59.828 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:37:59.828 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:37:59.828 Test: blockdev write read max offset ...passed 00:37:59.828 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:37:59.828 Test: blockdev writev readv 8 blocks ...passed 00:37:59.828 Test: blockdev writev readv 30 x 1block ...passed 00:37:59.828 Test: blockdev writev readv block ...passed 00:37:59.828 Test: blockdev writev readv size > 128k ...passed 00:37:59.828 Test: blockdev writev readv size > 128k in two iovs ...passed 00:37:59.828 Test: blockdev comparev and writev ...passed 00:37:59.828 Test: blockdev nvme passthru rw ...passed 00:37:59.828 Test: blockdev nvme passthru vendor specific ...passed 00:37:59.828 Test: blockdev nvme admin passthru ...passed 00:37:59.828 Test: blockdev copy ...passed 00:37:59.828 00:37:59.828 Run Summary: Type Total Ran Passed Failed Inactive 00:37:59.828 suites 1 1 n/a 0 0 00:37:59.828 tests 23 23 23 0 0 00:37:59.828 asserts 130 130 130 0 n/a 00:37:59.828 00:37:59.828 Elapsed time = 0.395 seconds 00:37:59.828 0 00:37:59.828 16:54:56 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:37:59.828 16:54:56 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:38:00.087 16:54:56 
compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:38:00.655 16:54:57 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:38:00.655 16:54:57 compress_isal -- compress/compress.sh@62 -- # killprocess 1855002 00:38:00.655 16:54:57 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 1855002 ']' 00:38:00.655 16:54:57 compress_isal -- common/autotest_common.sh@954 -- # kill -0 1855002 00:38:00.655 16:54:57 compress_isal -- common/autotest_common.sh@955 -- # uname 00:38:00.655 16:54:57 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:00.655 16:54:57 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1855002 00:38:00.655 16:54:57 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:00.655 16:54:57 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:00.655 16:54:57 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1855002' 00:38:00.655 killing process with pid 1855002 00:38:00.655 16:54:57 compress_isal -- common/autotest_common.sh@969 -- # kill 1855002 00:38:00.655 16:54:57 compress_isal -- common/autotest_common.sh@974 -- # wait 1855002 00:38:04.849 16:55:01 compress_isal -- compress/compress.sh@91 -- # '[' 1 -eq 1 ']' 00:38:04.849 16:55:01 compress_isal -- compress/compress.sh@92 -- # run_bdevperf 64 16384 30 00:38:04.849 16:55:01 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:38:04.849 16:55:01 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=1857020 00:38:04.849 16:55:01 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:04.849 16:55:01 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 
30 -C -m 0x6 00:38:04.849 16:55:01 compress_isal -- compress/compress.sh@73 -- # waitforlisten 1857020 00:38:04.849 16:55:01 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 1857020 ']' 00:38:04.849 16:55:01 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:04.849 16:55:01 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:04.849 16:55:01 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:04.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:04.849 16:55:01 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:04.849 16:55:01 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:38:04.849 [2024-07-24 16:55:01.333338] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:38:04.849 [2024-07-24 16:55:01.333452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857020 ] 00:38:04.849 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:04.849 EAL: Requested device 0000:3d:01.0 cannot be used
[... identical qat_pci_device_allocate() / EAL "cannot be used" pairs repeated for the remaining QAT VFs 0000:3d:01.1 through 0000:3f:02.7 ...]
00:38:04.850 [2024-07-24 16:55:01.520551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:05.109 [2024-07-24 16:55:01.796158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:05.109
[2024-07-24 16:55:01.796179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:05.678 16:55:02 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:05.678 16:55:02 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:38:05.678 16:55:02 compress_isal -- compress/compress.sh@74 -- # create_vols 00:38:05.678 16:55:02 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:05.678 16:55:02 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:38:08.959 16:55:05 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:38:08.959 16:55:05 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:38:08.959 16:55:05 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:08.959 16:55:05 compress_isal -- common/autotest_common.sh@901 -- # local i 00:38:08.959 16:55:05 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:08.959 16:55:05 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:08.959 16:55:05 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:08.959 16:55:05 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:38:08.959 [ 00:38:08.959 { 00:38:08.959 "name": "Nvme0n1", 00:38:08.959 "aliases": [ 00:38:08.959 "7ca7c23d-b4c3-41ac-9cd3-7577f6bca01c" 00:38:08.959 ], 00:38:08.959 "product_name": "NVMe disk", 00:38:08.959 "block_size": 512, 00:38:08.959 "num_blocks": 3907029168, 00:38:08.959 "uuid": "7ca7c23d-b4c3-41ac-9cd3-7577f6bca01c", 00:38:08.959 "assigned_rate_limits": { 00:38:08.959 "rw_ios_per_sec": 0, 00:38:08.959 "rw_mbytes_per_sec": 0, 00:38:08.959 "r_mbytes_per_sec": 0, 00:38:08.959 
"w_mbytes_per_sec": 0 00:38:08.959 }, 00:38:08.959 "claimed": false, 00:38:08.959 "zoned": false, 00:38:08.959 "supported_io_types": { 00:38:08.959 "read": true, 00:38:08.959 "write": true, 00:38:08.959 "unmap": true, 00:38:08.959 "flush": true, 00:38:08.959 "reset": true, 00:38:08.959 "nvme_admin": true, 00:38:08.959 "nvme_io": true, 00:38:08.959 "nvme_io_md": false, 00:38:08.959 "write_zeroes": true, 00:38:08.959 "zcopy": false, 00:38:08.959 "get_zone_info": false, 00:38:08.959 "zone_management": false, 00:38:08.959 "zone_append": false, 00:38:08.959 "compare": false, 00:38:08.959 "compare_and_write": false, 00:38:08.959 "abort": true, 00:38:08.959 "seek_hole": false, 00:38:08.959 "seek_data": false, 00:38:08.959 "copy": false, 00:38:08.959 "nvme_iov_md": false 00:38:08.959 }, 00:38:08.959 "driver_specific": { 00:38:08.959 "nvme": [ 00:38:08.959 { 00:38:08.959 "pci_address": "0000:d8:00.0", 00:38:08.959 "trid": { 00:38:08.959 "trtype": "PCIe", 00:38:08.959 "traddr": "0000:d8:00.0" 00:38:08.959 }, 00:38:08.959 "ctrlr_data": { 00:38:08.959 "cntlid": 0, 00:38:08.959 "vendor_id": "0x8086", 00:38:08.959 "model_number": "INTEL SSDPE2KX020T8", 00:38:08.959 "serial_number": "BTLJ125505KA2P0BGN", 00:38:08.959 "firmware_revision": "VDV10170", 00:38:08.959 "oacs": { 00:38:08.959 "security": 0, 00:38:08.959 "format": 1, 00:38:08.959 "firmware": 1, 00:38:08.959 "ns_manage": 1 00:38:08.959 }, 00:38:08.959 "multi_ctrlr": false, 00:38:08.959 "ana_reporting": false 00:38:08.959 }, 00:38:08.959 "vs": { 00:38:08.959 "nvme_version": "1.2" 00:38:08.959 }, 00:38:08.959 "ns_data": { 00:38:08.959 "id": 1, 00:38:08.959 "can_share": false 00:38:08.959 } 00:38:08.959 } 00:38:08.959 ], 00:38:08.959 "mp_policy": "active_passive" 00:38:08.959 } 00:38:08.959 } 00:38:08.959 ] 00:38:08.959 16:55:05 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:38:08.959 16:55:05 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:38:10.333 498945cd-4738-48fa-a34f-029792b016f1 00:38:10.333 16:55:06 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:38:10.333 98a1c9ee-c6be-4712-b5ac-83e12be8ca3d 00:38:10.333 16:55:07 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:38:10.333 16:55:07 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:38:10.333 16:55:07 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:10.333 16:55:07 compress_isal -- common/autotest_common.sh@901 -- # local i 00:38:10.333 16:55:07 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:10.333 16:55:07 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:10.333 16:55:07 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:10.592 16:55:07 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:38:10.850 [ 00:38:10.850 { 00:38:10.850 "name": "98a1c9ee-c6be-4712-b5ac-83e12be8ca3d", 00:38:10.850 "aliases": [ 00:38:10.850 "lvs0/lv0" 00:38:10.850 ], 00:38:10.850 "product_name": "Logical Volume", 00:38:10.850 "block_size": 512, 00:38:10.850 "num_blocks": 204800, 00:38:10.850 "uuid": "98a1c9ee-c6be-4712-b5ac-83e12be8ca3d", 00:38:10.850 "assigned_rate_limits": { 00:38:10.850 "rw_ios_per_sec": 0, 00:38:10.850 "rw_mbytes_per_sec": 0, 00:38:10.850 "r_mbytes_per_sec": 0, 00:38:10.850 "w_mbytes_per_sec": 0 00:38:10.850 }, 00:38:10.850 "claimed": false, 00:38:10.850 "zoned": false, 00:38:10.850 "supported_io_types": { 00:38:10.850 "read": true, 00:38:10.850 "write": true, 00:38:10.850 "unmap": true, 00:38:10.850 "flush": false, 00:38:10.850 "reset": true, 00:38:10.850 "nvme_admin": false, 
00:38:10.850 "nvme_io": false, 00:38:10.850 "nvme_io_md": false, 00:38:10.850 "write_zeroes": true, 00:38:10.850 "zcopy": false, 00:38:10.850 "get_zone_info": false, 00:38:10.850 "zone_management": false, 00:38:10.850 "zone_append": false, 00:38:10.850 "compare": false, 00:38:10.850 "compare_and_write": false, 00:38:10.850 "abort": false, 00:38:10.850 "seek_hole": true, 00:38:10.850 "seek_data": true, 00:38:10.850 "copy": false, 00:38:10.850 "nvme_iov_md": false 00:38:10.850 }, 00:38:10.850 "driver_specific": { 00:38:10.850 "lvol": { 00:38:10.850 "lvol_store_uuid": "498945cd-4738-48fa-a34f-029792b016f1", 00:38:10.850 "base_bdev": "Nvme0n1", 00:38:10.850 "thin_provision": true, 00:38:10.850 "num_allocated_clusters": 0, 00:38:10.850 "snapshot": false, 00:38:10.850 "clone": false, 00:38:10.850 "esnap_clone": false 00:38:10.850 } 00:38:10.850 } 00:38:10.850 } 00:38:10.850 ] 00:38:10.851 16:55:07 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:38:10.851 16:55:07 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:38:10.851 16:55:07 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:38:11.109 [2024-07-24 16:55:07.838918] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:38:11.109 COMP_lvs0/lv0 00:38:11.109 16:55:07 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:38:11.109 16:55:07 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:38:11.109 16:55:07 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:11.109 16:55:07 compress_isal -- common/autotest_common.sh@901 -- # local i 00:38:11.109 16:55:07 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:11.109 16:55:07 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:11.109 16:55:07 compress_isal -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:11.368 16:55:08 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:38:11.626 [ 00:38:11.626 { 00:38:11.626 "name": "COMP_lvs0/lv0", 00:38:11.626 "aliases": [ 00:38:11.626 "caa027e7-4898-52b0-8529-8b07fbced5f2" 00:38:11.626 ], 00:38:11.626 "product_name": "compress", 00:38:11.626 "block_size": 512, 00:38:11.626 "num_blocks": 200704, 00:38:11.626 "uuid": "caa027e7-4898-52b0-8529-8b07fbced5f2", 00:38:11.626 "assigned_rate_limits": { 00:38:11.626 "rw_ios_per_sec": 0, 00:38:11.626 "rw_mbytes_per_sec": 0, 00:38:11.626 "r_mbytes_per_sec": 0, 00:38:11.626 "w_mbytes_per_sec": 0 00:38:11.626 }, 00:38:11.626 "claimed": false, 00:38:11.626 "zoned": false, 00:38:11.626 "supported_io_types": { 00:38:11.626 "read": true, 00:38:11.626 "write": true, 00:38:11.626 "unmap": false, 00:38:11.626 "flush": false, 00:38:11.626 "reset": false, 00:38:11.626 "nvme_admin": false, 00:38:11.626 "nvme_io": false, 00:38:11.626 "nvme_io_md": false, 00:38:11.626 "write_zeroes": true, 00:38:11.626 "zcopy": false, 00:38:11.626 "get_zone_info": false, 00:38:11.626 "zone_management": false, 00:38:11.626 "zone_append": false, 00:38:11.626 "compare": false, 00:38:11.626 "compare_and_write": false, 00:38:11.626 "abort": false, 00:38:11.626 "seek_hole": false, 00:38:11.626 "seek_data": false, 00:38:11.626 "copy": false, 00:38:11.626 "nvme_iov_md": false 00:38:11.626 }, 00:38:11.626 "driver_specific": { 00:38:11.626 "compress": { 00:38:11.626 "name": "COMP_lvs0/lv0", 00:38:11.626 "base_bdev_name": "98a1c9ee-c6be-4712-b5ac-83e12be8ca3d", 00:38:11.626 "pm_path": "/tmp/pmem/8832d2e5-94bc-4830-ad80-82973abb7978" 00:38:11.626 } 00:38:11.626 } 00:38:11.626 } 00:38:11.626 ] 00:38:11.626 16:55:08 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:38:11.626 16:55:08 
compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:38:11.626 Running I/O for 30 seconds... 00:38:43.769 00:38:43.769 Latency(us) 00:38:43.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:43.769 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 64, IO size: 16384) 00:38:43.769 Verification LBA range: start 0x0 length 0xc40 00:38:43.769 COMP_lvs0/lv0 : 30.01 1373.12 21.45 0.00 0.00 46375.93 956.83 36071.01 00:38:43.769 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 64, IO size: 16384) 00:38:43.769 Verification LBA range: start 0xc40 length 0xc40 00:38:43.769 COMP_lvs0/lv0 : 30.01 4584.55 71.63 0.00 0.00 13847.00 534.12 24326.96 00:38:43.769 =================================================================================================================== 00:38:43.769 Total : 5957.67 93.09 0.00 0.00 21344.63 534.12 36071.01 00:38:43.769 0 00:38:43.769 16:55:38 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:38:43.769 16:55:38 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:38:43.769 16:55:38 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:38:43.769 16:55:39 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:38:43.769 16:55:39 compress_isal -- compress/compress.sh@78 -- # killprocess 1857020 00:38:43.769 16:55:39 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 1857020 ']' 00:38:43.769 16:55:39 compress_isal -- common/autotest_common.sh@954 -- # kill -0 1857020 00:38:43.769 16:55:39 compress_isal -- common/autotest_common.sh@955 -- # uname 00:38:43.769 16:55:39 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:43.769 16:55:39 compress_isal -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1857020 00:38:43.769 16:55:39 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:43.769 16:55:39 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:43.769 16:55:39 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1857020' 00:38:43.769 killing process with pid 1857020 00:38:43.769 16:55:39 compress_isal -- common/autotest_common.sh@969 -- # kill 1857020 00:38:43.769 Received shutdown signal, test time was about 30.000000 seconds 00:38:43.769 00:38:43.769 Latency(us) 00:38:43.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:43.769 =================================================================================================================== 00:38:43.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:43.769 16:55:39 compress_isal -- common/autotest_common.sh@974 -- # wait 1857020 00:38:46.300 16:55:42 compress_isal -- compress/compress.sh@95 -- # export TEST_TRANSPORT=tcp 00:38:46.300 16:55:42 compress_isal -- compress/compress.sh@95 -- # TEST_TRANSPORT=tcp 00:38:46.300 16:55:42 compress_isal -- compress/compress.sh@96 -- # NET_TYPE=virt 00:38:46.300 16:55:42 compress_isal -- compress/compress.sh@96 -- # nvmftestinit 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:46.300 16:55:42 compress_isal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:46.300 16:55:42 compress_isal -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@432 -- # nvmf_veth_init 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:38:46.300 16:55:42 compress_isal -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:38:46.300 Cannot find device "nvmf_tgt_br" 00:38:46.300 16:55:43 compress_isal -- 
nvmf/common.sh@155 -- # true 00:38:46.300 16:55:43 compress_isal -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:38:46.300 Cannot find device "nvmf_tgt_br2" 00:38:46.300 16:55:43 compress_isal -- nvmf/common.sh@156 -- # true 00:38:46.300 16:55:43 compress_isal -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:38:46.300 16:55:43 compress_isal -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:38:46.300 Cannot find device "nvmf_tgt_br" 00:38:46.300 16:55:43 compress_isal -- nvmf/common.sh@158 -- # true 00:38:46.300 16:55:43 compress_isal -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:38:46.300 Cannot find device "nvmf_tgt_br2" 00:38:46.300 16:55:43 compress_isal -- nvmf/common.sh@159 -- # true 00:38:46.300 16:55:43 compress_isal -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:38:46.300 16:55:43 compress_isal -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:38:46.301 16:55:43 compress_isal -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:46.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:46.301 16:55:43 compress_isal -- nvmf/common.sh@162 -- # true 00:38:46.301 16:55:43 compress_isal -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:46.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:46.301 16:55:43 compress_isal -- nvmf/common.sh@163 -- # true 00:38:46.301 16:55:43 compress_isal -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:38:46.301 16:55:43 compress_isal -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:46.301 16:55:43 compress_isal -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:46.301 16:55:43 compress_isal -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:46.561 16:55:43 compress_isal -- 
nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:46.561 16:55:43 compress_isal -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:46.561 16:55:43 compress_isal -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:46.561 16:55:43 compress_isal -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:46.561 16:55:43 compress_isal -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:46.561 16:55:43 compress_isal -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:38:46.561 16:55:43 compress_isal -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:38:46.561 16:55:43 compress_isal -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:38:46.561 16:55:43 compress_isal -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:38:46.561 16:55:43 compress_isal -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:46.561 16:55:43 compress_isal -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:46.561 16:55:43 compress_isal -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:46.562 16:55:43 compress_isal -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:38:46.562 16:55:43 compress_isal -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:38:46.562 16:55:43 compress_isal -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:38:46.562 16:55:43 compress_isal -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:46.562 16:55:43 compress_isal -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:38:46.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:46.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:38:46.822 00:38:46.822 --- 10.0.0.2 ping statistics --- 00:38:46.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.822 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:38:46.822 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:46.822 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:38:46.822 00:38:46.822 --- 10.0.0.3 ping statistics --- 00:38:46.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.822 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:46.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:46.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:38:46.822 00:38:46.822 --- 10.0.0.1 ping statistics --- 00:38:46.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.822 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@433 -- # return 0 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:46.822 16:55:43 compress_isal 
-- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:46.822 16:55:43 compress_isal -- compress/compress.sh@97 -- # nvmfappstart -m 0x7 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:46.822 16:55:43 compress_isal -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:46.822 16:55:43 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@481 -- # nvmfpid=1864057 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@482 -- # waitforlisten 1864057 00:38:46.822 16:55:43 compress_isal -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:38:46.822 16:55:43 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 1864057 ']' 00:38:46.822 16:55:43 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:46.822 16:55:43 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:46.822 16:55:43 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:46.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:46.822 16:55:43 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:46.822 16:55:43 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:38:46.822 [2024-07-24 16:55:43.672644] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:38:46.822 [2024-07-24 16:55:43.672759] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:02.1 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:47.082 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:47.082 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:47.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:47.082 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:47.082 [2024-07-24 16:55:43.906517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:47.651 [2024-07-24 16:55:44.205484] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:47.651 [2024-07-24 16:55:44.205535] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:47.651 [2024-07-24 16:55:44.205554] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:47.651 [2024-07-24 16:55:44.205569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:47.651 [2024-07-24 16:55:44.205585] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
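The nvmf_tgt above was launched with `-m 0x7` (and the perf client later in this log uses `-c 0x18`); an SPDK core mask is simply a bitmap of CPU indices, which is why the log reports "Total cores available: 3" and reactors starting on cores 0, 1 and 2. A minimal sketch of that mapping — `cores_from_mask` is an illustrative helper, not an SPDK API:

```python
def cores_from_mask(mask: int) -> list[int]:
    """Return the CPU indices whose bits are set in an SPDK-style core mask."""
    return [i for i in range(mask.bit_length()) if mask >> i & 1]

# -m 0x7 -> reactors on cores 0, 1 and 2 (three reactors, as logged)
print(cores_from_mask(0x7))   # → [0, 1, 2]
# -c 0x18 -> perf workers on lcores 3 and 4, matching the later perf output
print(cores_from_mask(0x18))  # → [3, 4]
```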
00:38:47.651 [2024-07-24 16:55:44.205673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:47.651 [2024-07-24 16:55:44.205710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.651 [2024-07-24 16:55:44.205716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:47.910 16:55:44 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:47.910 16:55:44 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:38:47.910 16:55:44 compress_isal -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:38:47.910 16:55:44 compress_isal -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:47.910 16:55:44 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:38:47.910 16:55:44 compress_isal -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:47.910 16:55:44 compress_isal -- compress/compress.sh@98 -- # trap 'nvmftestfini; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:47.910 16:55:44 compress_isal -- compress/compress.sh@101 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -u 8192 00:38:48.169 [2024-07-24 16:55:44.920212] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:48.169 16:55:44 compress_isal -- compress/compress.sh@102 -- # create_vols 00:38:48.169 16:55:44 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:48.169 16:55:44 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:38:51.455 16:55:48 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:38:51.455 16:55:48 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:38:51.455 16:55:48 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:51.455 16:55:48 compress_isal -- 
common/autotest_common.sh@901 -- # local i 00:38:51.455 16:55:48 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:51.455 16:55:48 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:51.455 16:55:48 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:51.714 16:55:48 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:38:51.972 [ 00:38:51.972 { 00:38:51.972 "name": "Nvme0n1", 00:38:51.972 "aliases": [ 00:38:51.972 "4a57fc98-8a43-4971-982e-6f1b8bd989db" 00:38:51.972 ], 00:38:51.972 "product_name": "NVMe disk", 00:38:51.972 "block_size": 512, 00:38:51.972 "num_blocks": 3907029168, 00:38:51.972 "uuid": "4a57fc98-8a43-4971-982e-6f1b8bd989db", 00:38:51.972 "assigned_rate_limits": { 00:38:51.972 "rw_ios_per_sec": 0, 00:38:51.972 "rw_mbytes_per_sec": 0, 00:38:51.972 "r_mbytes_per_sec": 0, 00:38:51.972 "w_mbytes_per_sec": 0 00:38:51.972 }, 00:38:51.972 "claimed": false, 00:38:51.972 "zoned": false, 00:38:51.972 "supported_io_types": { 00:38:51.972 "read": true, 00:38:51.972 "write": true, 00:38:51.972 "unmap": true, 00:38:51.972 "flush": true, 00:38:51.972 "reset": true, 00:38:51.972 "nvme_admin": true, 00:38:51.972 "nvme_io": true, 00:38:51.972 "nvme_io_md": false, 00:38:51.972 "write_zeroes": true, 00:38:51.972 "zcopy": false, 00:38:51.972 "get_zone_info": false, 00:38:51.972 "zone_management": false, 00:38:51.972 "zone_append": false, 00:38:51.972 "compare": false, 00:38:51.972 "compare_and_write": false, 00:38:51.972 "abort": true, 00:38:51.972 "seek_hole": false, 00:38:51.972 "seek_data": false, 00:38:51.972 "copy": false, 00:38:51.972 "nvme_iov_md": false 00:38:51.972 }, 00:38:51.972 "driver_specific": { 00:38:51.972 "nvme": [ 00:38:51.972 { 00:38:51.972 "pci_address": "0000:d8:00.0", 00:38:51.972 "trid": { 00:38:51.972 "trtype": "PCIe", 
00:38:51.972 "traddr": "0000:d8:00.0" 00:38:51.972 }, 00:38:51.972 "ctrlr_data": { 00:38:51.972 "cntlid": 0, 00:38:51.972 "vendor_id": "0x8086", 00:38:51.972 "model_number": "INTEL SSDPE2KX020T8", 00:38:51.972 "serial_number": "BTLJ125505KA2P0BGN", 00:38:51.972 "firmware_revision": "VDV10170", 00:38:51.972 "oacs": { 00:38:51.972 "security": 0, 00:38:51.972 "format": 1, 00:38:51.972 "firmware": 1, 00:38:51.972 "ns_manage": 1 00:38:51.972 }, 00:38:51.972 "multi_ctrlr": false, 00:38:51.972 "ana_reporting": false 00:38:51.972 }, 00:38:51.972 "vs": { 00:38:51.972 "nvme_version": "1.2" 00:38:51.972 }, 00:38:51.972 "ns_data": { 00:38:51.972 "id": 1, 00:38:51.972 "can_share": false 00:38:51.972 } 00:38:51.972 } 00:38:51.972 ], 00:38:51.972 "mp_policy": "active_passive" 00:38:51.972 } 00:38:51.972 } 00:38:51.972 ] 00:38:51.972 16:55:48 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:38:51.972 16:55:48 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:38:53.349 f00ff483-87dd-4545-870c-0111b8daa204 00:38:53.349 16:55:49 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:38:53.349 44083820-f0b1-40e7-aa24-dcf8fe71277a 00:38:53.349 16:55:50 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:38:53.350 16:55:50 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:38:53.350 16:55:50 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:53.350 16:55:50 compress_isal -- common/autotest_common.sh@901 -- # local i 00:38:53.350 16:55:50 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:53.350 16:55:50 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:53.350 16:55:50 compress_isal -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:53.608 16:55:50 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:38:53.867 [ 00:38:53.867 { 00:38:53.867 "name": "44083820-f0b1-40e7-aa24-dcf8fe71277a", 00:38:53.867 "aliases": [ 00:38:53.867 "lvs0/lv0" 00:38:53.867 ], 00:38:53.867 "product_name": "Logical Volume", 00:38:53.867 "block_size": 512, 00:38:53.867 "num_blocks": 204800, 00:38:53.867 "uuid": "44083820-f0b1-40e7-aa24-dcf8fe71277a", 00:38:53.867 "assigned_rate_limits": { 00:38:53.867 "rw_ios_per_sec": 0, 00:38:53.867 "rw_mbytes_per_sec": 0, 00:38:53.867 "r_mbytes_per_sec": 0, 00:38:53.867 "w_mbytes_per_sec": 0 00:38:53.867 }, 00:38:53.867 "claimed": false, 00:38:53.867 "zoned": false, 00:38:53.867 "supported_io_types": { 00:38:53.867 "read": true, 00:38:53.867 "write": true, 00:38:53.867 "unmap": true, 00:38:53.867 "flush": false, 00:38:53.867 "reset": true, 00:38:53.867 "nvme_admin": false, 00:38:53.867 "nvme_io": false, 00:38:53.867 "nvme_io_md": false, 00:38:53.867 "write_zeroes": true, 00:38:53.867 "zcopy": false, 00:38:53.867 "get_zone_info": false, 00:38:53.867 "zone_management": false, 00:38:53.867 "zone_append": false, 00:38:53.867 "compare": false, 00:38:53.867 "compare_and_write": false, 00:38:53.867 "abort": false, 00:38:53.867 "seek_hole": true, 00:38:53.867 "seek_data": true, 00:38:53.867 "copy": false, 00:38:53.867 "nvme_iov_md": false 00:38:53.867 }, 00:38:53.867 "driver_specific": { 00:38:53.867 "lvol": { 00:38:53.867 "lvol_store_uuid": "f00ff483-87dd-4545-870c-0111b8daa204", 00:38:53.867 "base_bdev": "Nvme0n1", 00:38:53.867 "thin_provision": true, 00:38:53.867 "num_allocated_clusters": 0, 00:38:53.867 "snapshot": false, 00:38:53.867 "clone": false, 00:38:53.867 "esnap_clone": false 00:38:53.867 } 00:38:53.867 } 00:38:53.867 } 00:38:53.867 ] 00:38:53.867 16:55:50 compress_isal -- 
common/autotest_common.sh@907 -- # return 0 00:38:53.867 16:55:50 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:38:53.867 16:55:50 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:38:54.126 [2024-07-24 16:55:50.755299] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:38:54.126 COMP_lvs0/lv0 00:38:54.126 16:55:50 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:38:54.126 16:55:50 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:38:54.126 16:55:50 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:54.126 16:55:50 compress_isal -- common/autotest_common.sh@901 -- # local i 00:38:54.126 16:55:50 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:54.126 16:55:50 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:54.126 16:55:50 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:54.385 16:55:50 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:38:54.385 [ 00:38:54.385 { 00:38:54.385 "name": "COMP_lvs0/lv0", 00:38:54.385 "aliases": [ 00:38:54.385 "8a44b5f3-5eb3-5b06-8d56-8be3eff0fe75" 00:38:54.385 ], 00:38:54.385 "product_name": "compress", 00:38:54.385 "block_size": 512, 00:38:54.385 "num_blocks": 200704, 00:38:54.385 "uuid": "8a44b5f3-5eb3-5b06-8d56-8be3eff0fe75", 00:38:54.385 "assigned_rate_limits": { 00:38:54.385 "rw_ios_per_sec": 0, 00:38:54.385 "rw_mbytes_per_sec": 0, 00:38:54.385 "r_mbytes_per_sec": 0, 00:38:54.385 "w_mbytes_per_sec": 0 00:38:54.385 }, 00:38:54.385 "claimed": false, 00:38:54.385 "zoned": false, 00:38:54.385 "supported_io_types": { 
00:38:54.385 "read": true, 00:38:54.385 "write": true, 00:38:54.385 "unmap": false, 00:38:54.385 "flush": false, 00:38:54.385 "reset": false, 00:38:54.385 "nvme_admin": false, 00:38:54.385 "nvme_io": false, 00:38:54.385 "nvme_io_md": false, 00:38:54.385 "write_zeroes": true, 00:38:54.385 "zcopy": false, 00:38:54.385 "get_zone_info": false, 00:38:54.385 "zone_management": false, 00:38:54.385 "zone_append": false, 00:38:54.385 "compare": false, 00:38:54.385 "compare_and_write": false, 00:38:54.385 "abort": false, 00:38:54.385 "seek_hole": false, 00:38:54.385 "seek_data": false, 00:38:54.385 "copy": false, 00:38:54.385 "nvme_iov_md": false 00:38:54.385 }, 00:38:54.385 "driver_specific": { 00:38:54.385 "compress": { 00:38:54.385 "name": "COMP_lvs0/lv0", 00:38:54.385 "base_bdev_name": "44083820-f0b1-40e7-aa24-dcf8fe71277a", 00:38:54.385 "pm_path": "/tmp/pmem/8c68b0ae-8836-41fa-ad8e-c5fc10188cdb" 00:38:54.385 } 00:38:54.385 } 00:38:54.385 } 00:38:54.386 ] 00:38:54.386 16:55:51 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:38:54.386 16:55:51 compress_isal -- compress/compress.sh@103 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:54.644 16:55:51 compress_isal -- compress/compress.sh@104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 COMP_lvs0/lv0 00:38:54.902 16:55:51 compress_isal -- compress/compress.sh@105 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:55.161 [2024-07-24 16:55:51.883976] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:55.161 16:55:51 compress_isal -- compress/compress.sh@109 -- # perf_pid=1865421 00:38:55.161 16:55:51 compress_isal -- compress/compress.sh@108 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 64 -s 512 -w randrw -t 30 -c 0x18 -M 50 00:38:55.161 16:55:51 compress_isal -- compress/compress.sh@112 -- # trap 'killprocess $perf_pid; compress_err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:55.161 16:55:51 compress_isal -- compress/compress.sh@113 -- # wait 1865421 00:38:55.420 [2024-07-24 16:55:52.201485] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:39:27.545 Initializing NVMe Controllers 00:39:27.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:39:27.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:39:27.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:39:27.545 Initialization complete. Launching workers. 
00:39:27.545 ======================================================== 00:39:27.545 Latency(us) 00:39:27.545 Device Information : IOPS MiB/s Average min max 00:39:27.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 4507.80 17.61 14199.91 1828.64 31272.90 00:39:27.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 2819.30 11.01 22704.04 3276.01 44886.20 00:39:27.545 ======================================================== 00:39:27.545 Total : 7327.10 28.62 17472.11 1828.64 44886.20 00:39:27.545 00:39:27.545 16:56:22 compress_isal -- compress/compress.sh@114 -- # destroy_vols 00:39:27.545 16:56:22 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:39:27.545 16:56:22 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:39:27.545 16:56:22 compress_isal -- compress/compress.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:39:27.545 16:56:22 compress_isal -- compress/compress.sh@117 -- # nvmftestfini 00:39:27.545 16:56:22 compress_isal -- nvmf/common.sh@488 -- # nvmfcleanup 00:39:27.545 16:56:22 compress_isal -- nvmf/common.sh@117 -- # sync 00:39:27.545 16:56:22 compress_isal -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:39:27.545 16:56:22 compress_isal -- nvmf/common.sh@120 -- # set +e 00:39:27.545 16:56:22 compress_isal -- nvmf/common.sh@121 -- # for i in {1..20} 00:39:27.545 16:56:22 compress_isal -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:39:27.545 rmmod nvme_tcp 00:39:27.545 rmmod nvme_fabrics 00:39:27.545 rmmod nvme_keyring 00:39:27.545 16:56:22 compress_isal -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:39:27.545 16:56:22 compress_isal -- nvmf/common.sh@124 -- # set -e 00:39:27.545 16:56:22 compress_isal -- nvmf/common.sh@125 -- # return 0 00:39:27.545 16:56:22 compress_isal -- nvmf/common.sh@489 -- # '[' 
-n 1864057 ']' 00:39:27.545 16:56:22 compress_isal -- nvmf/common.sh@490 -- # killprocess 1864057 00:39:27.545 16:56:22 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 1864057 ']' 00:39:27.545 16:56:22 compress_isal -- common/autotest_common.sh@954 -- # kill -0 1864057 00:39:27.545 16:56:22 compress_isal -- common/autotest_common.sh@955 -- # uname 00:39:27.545 16:56:22 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:27.545 16:56:22 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1864057 00:39:27.545 16:56:22 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:27.545 16:56:22 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:27.545 16:56:22 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1864057' 00:39:27.545 killing process with pid 1864057 00:39:27.545 16:56:22 compress_isal -- common/autotest_common.sh@969 -- # kill 1864057 00:39:27.545 16:56:22 compress_isal -- common/autotest_common.sh@974 -- # wait 1864057 00:39:30.078 16:56:26 compress_isal -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:30.078 16:56:26 compress_isal -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:30.078 16:56:26 compress_isal -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:30.078 16:56:26 compress_isal -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:30.078 16:56:26 compress_isal -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:30.078 16:56:26 compress_isal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:30.078 16:56:26 compress_isal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:30.078 16:56:26 compress_isal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:30.078 16:56:26 compress_isal -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:39:30.078 16:56:26 compress_isal -- compress/compress.sh@120 -- # rm -rf 
/tmp/pmem 00:39:30.078 00:39:30.078 real 2m22.664s 00:39:30.078 user 6m21.969s 00:39:30.078 sys 0m20.043s 00:39:30.078 16:56:26 compress_isal -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:30.078 16:56:26 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:39:30.078 ************************************ 00:39:30.078 END TEST compress_isal 00:39:30.078 ************************************ 00:39:30.078 16:56:26 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:39:30.078 16:56:26 -- spdk/autotest.sh@360 -- # '[' 1 -eq 1 ']' 00:39:30.078 16:56:26 -- spdk/autotest.sh@361 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:39:30.078 16:56:26 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:30.078 16:56:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:30.078 16:56:26 -- common/autotest_common.sh@10 -- # set +x 00:39:30.078 ************************************ 00:39:30.078 START TEST blockdev_crypto_aesni 00:39:30.078 ************************************ 00:39:30.078 16:56:26 blockdev_crypto_aesni -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:39:30.337 * Looking for test storage... 
00:39:30.337 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:39:30.337 16:56:26 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:39:30.337 16:56:26 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:39:30.337 16:56:26 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:39:30.337 16:56:26 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:39:30.337 16:56:26 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:39:30.337 16:56:26 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:39:30.337 16:56:26 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:39:30.337 16:56:26 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:39:30.337 16:56:26 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:39:30.337 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # uname -s 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@681 -- # test_type=crypto_aesni 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # crypto_device= 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # dek= 00:39:30.338 16:56:27 
blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # env_ctx= 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == bdev ]] 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == crypto_* ]] 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1871185 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:39:30.338 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 1871185 00:39:30.338 16:56:27 blockdev_crypto_aesni -- common/autotest_common.sh@831 -- # '[' -z 1871185 ']' 00:39:30.338 16:56:27 blockdev_crypto_aesni -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:30.338 16:56:27 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:30.338 16:56:27 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:30.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:30.338 16:56:27 blockdev_crypto_aesni -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:30.338 16:56:27 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:30.338 [2024-07-24 16:56:27.137899] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:39:30.338 [2024-07-24 16:56:27.138021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871185 ] 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:01.0 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:01.1 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:01.2 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:01.3 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:01.4 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:01.5 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:01.6 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:01.7 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:02.0 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:02.1 
cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:02.2 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:02.3 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:02.4 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:02.5 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:02.6 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3d:02.7 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3f:01.0 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3f:01.1 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3f:01.2 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3f:01.3 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3f:01.4 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3f:01.5 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3f:01.6 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3f:01.7 cannot be used 
00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3f:02.0 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3f:02.1 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3f:02.2 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3f:02.3 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3f:02.4 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.597 EAL: Requested device 0000:3f:02.5 cannot be used 00:39:30.597 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.598 EAL: Requested device 0000:3f:02.6 cannot be used 00:39:30.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:30.598 EAL: Requested device 0000:3f:02.7 cannot be used 00:39:30.598 [2024-07-24 16:56:27.364936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:30.857 [2024-07-24 16:56:27.627486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:31.115 16:56:27 blockdev_crypto_aesni -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:31.115 16:56:27 blockdev_crypto_aesni -- common/autotest_common.sh@864 -- # return 0 00:39:31.115 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:39:31.115 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@704 -- # setup_crypto_aesni_conf 00:39:31.115 16:56:27 blockdev_crypto_aesni -- bdev/blockdev.sh@145 -- # rpc_cmd 00:39:31.115 16:56:27 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:31.115 16:56:27 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:31.374 
[2024-07-24 16:56:27.989292] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:39:31.375 [2024-07-24 16:56:27.997346] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:39:31.375 [2024-07-24 16:56:28.005363] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:39:31.633 [2024-07-24 16:56:28.355103] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:39:35.826 true 00:39:35.826 true 00:39:35.826 true 00:39:35.826 true 00:39:35.826 Malloc0 00:39:35.826 Malloc1 00:39:35.826 Malloc2 00:39:35.826 Malloc3 00:39:35.826 [2024-07-24 16:56:32.237987] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:39:35.826 crypto_ram 00:39:35.826 [2024-07-24 16:56:32.246163] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:39:35.826 crypto_ram2 00:39:35.827 [2024-07-24 16:56:32.254167] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:39:35.827 crypto_ram3 00:39:35.827 [2024-07-24 16:56:32.262205] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:39:35.827 crypto_ram4 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # cat 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd 
save_subsystem_config -n accel 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r .name 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' 
"8b1dc26b-80d5-5a13-a85c-dcbb3dc08e7a"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8b1dc26b-80d5-5a13-a85c-dcbb3dc08e7a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "f2cfbfd0-af2f-537e-9d71-42c3ba480c16"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f2cfbfd0-af2f-537e-9d71-42c3ba480c16",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "b16012fc-ec6a-53e4-a2ae-2bb1f74b3fcb"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "b16012fc-ec6a-53e4-a2ae-2bb1f74b3fcb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "b527d805-5bba-566b-8c01-ef0a6d4b3424"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "b527d805-5bba-566b-8c01-ef0a6d4b3424",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:39:35.827 16:56:32 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # killprocess 1871185 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@950 -- # '[' -z 1871185 ']' 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # kill -0 1871185 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # uname 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1871185 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1871185' 00:39:35.827 killing process with pid 1871185 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@969 -- # kill 1871185 00:39:35.827 16:56:32 blockdev_crypto_aesni -- common/autotest_common.sh@974 -- # wait 1871185 00:39:40.020 16:56:36 blockdev_crypto_aesni -- 
bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:40.020 16:56:36 blockdev_crypto_aesni -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:39:40.020 16:56:36 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:39:40.020 16:56:36 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:40.020 16:56:36 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:40.020 ************************************ 00:39:40.020 START TEST bdev_hello_world 00:39:40.020 ************************************ 00:39:40.020 16:56:36 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:39:40.020 [2024-07-24 16:56:36.777070] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:39:40.020 [2024-07-24 16:56:36.777188] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872731 ] 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:01.0 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:01.1 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:01.2 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:01.3 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:01.4 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:01.5 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:01.6 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:01.7 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:02.0 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:02.1 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:02.2 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:02.3 cannot be used 
00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:02.4 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:02.5 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:02.6 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3d:02.7 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3f:01.0 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3f:01.1 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3f:01.2 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3f:01.3 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3f:01.4 cannot be used 00:39:40.279 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.279 EAL: Requested device 0000:3f:01.5 cannot be used 00:39:40.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.280 EAL: Requested device 0000:3f:01.6 cannot be used 00:39:40.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.280 EAL: Requested device 0000:3f:01.7 cannot be used 00:39:40.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.280 EAL: Requested device 0000:3f:02.0 cannot be used 00:39:40.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.280 EAL: Requested device 0000:3f:02.1 cannot be used 00:39:40.280 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.280 EAL: Requested device 0000:3f:02.2 cannot be used 00:39:40.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.280 EAL: Requested device 0000:3f:02.3 cannot be used 00:39:40.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.280 EAL: Requested device 0000:3f:02.4 cannot be used 00:39:40.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.280 EAL: Requested device 0000:3f:02.5 cannot be used 00:39:40.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.280 EAL: Requested device 0000:3f:02.6 cannot be used 00:39:40.280 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:40.280 EAL: Requested device 0000:3f:02.7 cannot be used 00:39:40.280 [2024-07-24 16:56:37.001546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.538 [2024-07-24 16:56:37.279266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.538 [2024-07-24 16:56:37.301104] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:39:40.538 [2024-07-24 16:56:37.309125] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:39:40.538 [2024-07-24 16:56:37.317145] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:39:41.176 [2024-07-24 16:56:37.705038] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:39:43.712 [2024-07-24 16:56:40.555542] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:39:43.712 [2024-07-24 16:56:40.555628] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:39:43.712 [2024-07-24 16:56:40.555652] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:39:43.712 [2024-07-24 16:56:40.563555] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:39:43.712 [2024-07-24 16:56:40.563596] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:39:43.712 [2024-07-24 16:56:40.563613] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:43.712 [2024-07-24 16:56:40.571586] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:39:43.712 [2024-07-24 16:56:40.571627] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:39:43.712 [2024-07-24 16:56:40.571643] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:43.972 [2024-07-24 16:56:40.579588] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:39:43.972 [2024-07-24 16:56:40.579622] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:39:43.972 [2024-07-24 16:56:40.579637] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:43.972 [2024-07-24 16:56:40.819628] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:39:43.972 [2024-07-24 16:56:40.819679] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:39:43.972 [2024-07-24 16:56:40.819705] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:39:43.972 [2024-07-24 16:56:40.821996] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:39:43.972 [2024-07-24 16:56:40.822106] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:39:43.972 [2024-07-24 16:56:40.822129] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:39:43.972 [2024-07-24 16:56:40.822211] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello 
World! 00:39:43.972 00:39:43.972 [2024-07-24 16:56:40.822242] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:39:46.507 00:39:46.507 real 0m6.638s 00:39:46.507 user 0m6.063s 00:39:46.507 sys 0m0.527s 00:39:46.507 16:56:43 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:46.507 16:56:43 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:39:46.507 ************************************ 00:39:46.507 END TEST bdev_hello_world 00:39:46.507 ************************************ 00:39:46.507 16:56:43 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:39:46.507 16:56:43 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:46.507 16:56:43 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:46.507 16:56:43 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:46.766 ************************************ 00:39:46.766 START TEST bdev_bounds 00:39:46.766 ************************************ 00:39:46.766 16:56:43 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:39:46.766 16:56:43 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=1873855 00:39:46.766 16:56:43 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:39:46.766 16:56:43 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:39:46.766 16:56:43 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 1873855' 00:39:46.766 Process bdevio pid: 1873855 00:39:46.766 16:56:43 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 1873855 00:39:46.766 16:56:43 
blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 1873855 ']' 00:39:46.766 16:56:43 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:46.766 16:56:43 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:46.766 16:56:43 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:46.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:46.766 16:56:43 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:46.766 16:56:43 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:46.766 [2024-07-24 16:56:43.499088] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:39:46.766 [2024-07-24 16:56:43.499212] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1873855 ] 00:39:47.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:47.025 EAL: Requested device 0000:3d:01.0 cannot be used [... identical allocate/EAL message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7 ...] 00:39:47.025 [2024-07-24 16:56:43.725122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:47.284 [2024-07-24 16:56:43.994750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:47.284 
[2024-07-24 16:56:43.994818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:47.284 [2024-07-24 16:56:43.994821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:47.284 [2024-07-24 16:56:44.016641] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:39:47.284 [2024-07-24 16:56:44.024661] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:39:47.284 [2024-07-24 16:56:44.032688] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:39:47.544 [2024-07-24 16:56:44.400338] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:39:50.836 [2024-07-24 16:56:47.299275] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:39:50.836 [2024-07-24 16:56:47.299363] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:39:50.836 [2024-07-24 16:56:47.299384] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:50.836 [2024-07-24 16:56:47.307266] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:39:50.836 [2024-07-24 16:56:47.307303] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:39:50.836 [2024-07-24 16:56:47.307319] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:50.836 [2024-07-24 16:56:47.315314] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:39:50.836 [2024-07-24 16:56:47.315369] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:39:50.836 [2024-07-24 16:56:47.315385] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:50.836 
[2024-07-24 16:56:47.323307] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:39:50.836 [2024-07-24 16:56:47.323339] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:39:50.836 [2024-07-24 16:56:47.323353] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:51.095 16:56:47 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:51.095 16:56:47 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:39:51.095 16:56:47 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:39:51.095 I/O targets: 00:39:51.095 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:39:51.095 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:39:51.095 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:39:51.095 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:39:51.095 00:39:51.095 00:39:51.095 CUnit - A unit testing framework for C - Version 2.1-3 00:39:51.095 http://cunit.sourceforge.net/ 00:39:51.095 00:39:51.095 00:39:51.095 Suite: bdevio tests on: crypto_ram4 00:39:51.095 Test: blockdev write read block ...passed 00:39:51.095 Test: blockdev write zeroes read block ...passed 00:39:51.095 Test: blockdev write zeroes read no split ...passed 00:39:51.355 Test: blockdev write zeroes read split ...passed 00:39:51.355 Test: blockdev write zeroes read split partial ...passed 00:39:51.355 Test: blockdev reset ...passed 00:39:51.355 Test: blockdev write read 8 blocks ...passed 00:39:51.355 Test: blockdev write read size > 128k ...passed 00:39:51.355 Test: blockdev write read invalid size ...passed 00:39:51.355 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:51.355 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:51.355 Test: blockdev write 
read max offset ...passed 00:39:51.355 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:51.355 Test: blockdev writev readv 8 blocks ...passed 00:39:51.355 Test: blockdev writev readv 30 x 1block ...passed 00:39:51.355 Test: blockdev writev readv block ...passed 00:39:51.355 Test: blockdev writev readv size > 128k ...passed 00:39:51.355 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:51.355 Test: blockdev comparev and writev ...passed 00:39:51.355 Test: blockdev nvme passthru rw ...passed 00:39:51.355 Test: blockdev nvme passthru vendor specific ...passed 00:39:51.355 Test: blockdev nvme admin passthru ...passed 00:39:51.355 Test: blockdev copy ...passed 00:39:51.355 Suite: bdevio tests on: crypto_ram3 00:39:51.355 Test: blockdev write read block ...passed 00:39:51.355 Test: blockdev write zeroes read block ...passed 00:39:51.355 Test: blockdev write zeroes read no split ...passed 00:39:51.355 Test: blockdev write zeroes read split ...passed 00:39:51.355 Test: blockdev write zeroes read split partial ...passed 00:39:51.355 Test: blockdev reset ...passed 00:39:51.355 Test: blockdev write read 8 blocks ...passed 00:39:51.355 Test: blockdev write read size > 128k ...passed 00:39:51.355 Test: blockdev write read invalid size ...passed 00:39:51.355 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:51.355 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:51.355 Test: blockdev write read max offset ...passed 00:39:51.355 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:51.355 Test: blockdev writev readv 8 blocks ...passed 00:39:51.355 Test: blockdev writev readv 30 x 1block ...passed 00:39:51.355 Test: blockdev writev readv block ...passed 00:39:51.355 Test: blockdev writev readv size > 128k ...passed 00:39:51.355 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:51.355 Test: blockdev comparev and writev ...passed 
00:39:51.355 Test: blockdev nvme passthru rw ...passed 00:39:51.355 Test: blockdev nvme passthru vendor specific ...passed 00:39:51.355 Test: blockdev nvme admin passthru ...passed 00:39:51.355 Test: blockdev copy ...passed 00:39:51.355 Suite: bdevio tests on: crypto_ram2 00:39:51.355 Test: blockdev write read block ...passed 00:39:51.355 Test: blockdev write zeroes read block ...passed 00:39:51.355 Test: blockdev write zeroes read no split ...passed 00:39:51.615 Test: blockdev write zeroes read split ...passed 00:39:51.615 Test: blockdev write zeroes read split partial ...passed 00:39:51.615 Test: blockdev reset ...passed 00:39:51.615 Test: blockdev write read 8 blocks ...passed 00:39:51.615 Test: blockdev write read size > 128k ...passed 00:39:51.615 Test: blockdev write read invalid size ...passed 00:39:51.615 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:51.615 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:51.615 Test: blockdev write read max offset ...passed 00:39:51.615 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:51.615 Test: blockdev writev readv 8 blocks ...passed 00:39:51.615 Test: blockdev writev readv 30 x 1block ...passed 00:39:51.615 Test: blockdev writev readv block ...passed 00:39:51.615 Test: blockdev writev readv size > 128k ...passed 00:39:51.615 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:51.615 Test: blockdev comparev and writev ...passed 00:39:51.615 Test: blockdev nvme passthru rw ...passed 00:39:51.615 Test: blockdev nvme passthru vendor specific ...passed 00:39:51.615 Test: blockdev nvme admin passthru ...passed 00:39:51.615 Test: blockdev copy ...passed 00:39:51.615 Suite: bdevio tests on: crypto_ram 00:39:51.615 Test: blockdev write read block ...passed 00:39:51.615 Test: blockdev write zeroes read block ...passed 00:39:51.615 Test: blockdev write zeroes read no split ...passed 00:39:51.615 Test: blockdev write zeroes 
read split ...passed 00:39:51.875 Test: blockdev write zeroes read split partial ...passed 00:39:51.875 Test: blockdev reset ...passed 00:39:51.875 Test: blockdev write read 8 blocks ...passed 00:39:51.875 Test: blockdev write read size > 128k ...passed 00:39:51.875 Test: blockdev write read invalid size ...passed 00:39:51.875 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:51.875 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:51.875 Test: blockdev write read max offset ...passed 00:39:51.875 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:51.875 Test: blockdev writev readv 8 blocks ...passed 00:39:51.875 Test: blockdev writev readv 30 x 1block ...passed 00:39:51.875 Test: blockdev writev readv block ...passed 00:39:51.875 Test: blockdev writev readv size > 128k ...passed 00:39:51.875 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:51.875 Test: blockdev comparev and writev ...passed 00:39:51.875 Test: blockdev nvme passthru rw ...passed 00:39:51.875 Test: blockdev nvme passthru vendor specific ...passed 00:39:51.875 Test: blockdev nvme admin passthru ...passed 00:39:51.875 Test: blockdev copy ...passed 00:39:51.875 00:39:51.875 Run Summary: Type Total Ran Passed Failed Inactive 00:39:51.875 suites 4 4 n/a 0 0 00:39:51.875 tests 92 92 92 0 0 00:39:51.875 asserts 520 520 520 0 n/a 00:39:51.875 00:39:51.875 Elapsed time = 1.599 seconds 00:39:51.875 0 00:39:51.875 16:56:48 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 1873855 00:39:51.875 16:56:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 1873855 ']' 00:39:51.875 16:56:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 1873855 00:39:51.875 16:56:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:39:51.875 16:56:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:39:51.875 16:56:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1873855 00:39:51.875 16:56:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:51.875 16:56:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:51.875 16:56:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1873855' 00:39:51.875 killing process with pid 1873855 00:39:51.875 16:56:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@969 -- # kill 1873855 00:39:51.875 16:56:48 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@974 -- # wait 1873855 00:39:54.412 16:56:51 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:39:54.412 00:39:54.412 real 0m7.766s 00:39:54.412 user 0m21.078s 00:39:54.412 sys 0m0.785s 00:39:54.412 16:56:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:54.412 16:56:51 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:54.412 ************************************ 00:39:54.412 END TEST bdev_bounds 00:39:54.412 ************************************ 00:39:54.412 16:56:51 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:39:54.412 16:56:51 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:39:54.412 16:56:51 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:54.412 16:56:51 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:54.412 ************************************ 00:39:54.412 START TEST bdev_nbd 00:39:54.412 ************************************ 00:39:54.412 
16:56:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:39:54.412 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:39:54.412 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:39:54.412 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:54.412 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:39:54.412 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:39:54.412 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:39:54.412 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=4 00:39:54.412 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 
'crypto_ram4') 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=1875036 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 1875036 /var/tmp/spdk-nbd.sock 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 1875036 ']' 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:39:54.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:54.413 16:56:51 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:39:54.672 [2024-07-24 16:56:51.346037] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:39:54.672 [2024-07-24 16:56:51.346165] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:54.672 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:54.672 EAL: Requested device 0000:3d:01.0 cannot be used [... identical allocate/EAL message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7 ...] 00:39:54.931 [2024-07-24 16:56:51.572715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:55.190 [2024-07-24 16:56:51.864847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:55.190 [2024-07-24 16:56:51.886607] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:39:55.190 [2024-07-24 16:56:51.894645] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:39:55.190 [2024-07-24 16:56:51.902664] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:39:55.449 [2024-07-24 16:56:52.272624] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:39:58.738 [2024-07-24 16:56:55.153342] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:39:58.738 [2024-07-24 16:56:55.153410] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:39:58.738 [2024-07-24 16:56:55.153433] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:39:58.738 [2024-07-24 16:56:55.161357] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:39:58.738 [2024-07-24 16:56:55.161395] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:39:58.738 [2024-07-24 16:56:55.161411] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:58.738 [2024-07-24 16:56:55.169391] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:39:58.738 [2024-07-24 16:56:55.169434] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:39:58.738 [2024-07-24 16:56:55.169449] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:58.738 [2024-07-24 16:56:55.177373] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:39:58.738 [2024-07-24 16:56:55.177430] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:39:58.738 [2024-07-24 16:56:55.177445] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 
00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:39:59.308 16:56:55 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:39:59.308 
16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:59.308 1+0 records in 00:39:59.308 1+0 records out 00:39:59.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318439 s, 12.9 MB/s 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:39:59.308 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:39:59.595 16:56:56 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:59.595 1+0 records in 00:39:59.595 1+0 records out 00:39:59.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362236 s, 11.3 MB/s 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 
00:39:59.595 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:59.855 1+0 records in 00:39:59.855 1+0 records out 00:39:59.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343121 s, 11.9 MB/s 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:39:59.855 16:56:56 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:39:59.855 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:40:00.114 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:40:00.114 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:40:00.114 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:00.115 1+0 records in 00:40:00.115 1+0 records out 00:40:00.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344433 s, 11.9 MB/s 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:40:00.115 16:56:56 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:00.374 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:40:00.374 { 00:40:00.374 "nbd_device": "/dev/nbd0", 00:40:00.374 "bdev_name": "crypto_ram" 00:40:00.374 }, 00:40:00.374 { 00:40:00.374 "nbd_device": "/dev/nbd1", 00:40:00.374 "bdev_name": "crypto_ram2" 00:40:00.374 }, 00:40:00.374 { 00:40:00.374 "nbd_device": "/dev/nbd2", 00:40:00.374 "bdev_name": "crypto_ram3" 00:40:00.374 }, 00:40:00.374 { 00:40:00.374 "nbd_device": "/dev/nbd3", 00:40:00.374 "bdev_name": "crypto_ram4" 00:40:00.374 } 00:40:00.374 ]' 00:40:00.374 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:40:00.374 16:56:57 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:40:00.374 { 00:40:00.374 "nbd_device": "/dev/nbd0", 00:40:00.374 "bdev_name": "crypto_ram" 00:40:00.374 }, 00:40:00.374 { 00:40:00.374 "nbd_device": "/dev/nbd1", 00:40:00.374 "bdev_name": "crypto_ram2" 00:40:00.374 }, 00:40:00.374 { 00:40:00.374 "nbd_device": "/dev/nbd2", 00:40:00.374 "bdev_name": "crypto_ram3" 00:40:00.374 }, 00:40:00.374 { 00:40:00.374 "nbd_device": "/dev/nbd3", 00:40:00.374 "bdev_name": "crypto_ram4" 00:40:00.374 } 00:40:00.374 ]' 00:40:00.374 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:40:00.374 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:40:00.374 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:00.374 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:40:00.374 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:00.374 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:00.374 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:00.374 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:00.634 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:00.634 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:00.634 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:00.634 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:00.634 16:56:57 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:00.634 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:00.634 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:00.634 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:00.634 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:00.634 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:40:00.893 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:00.893 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:00.893 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:00.893 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:00.893 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:00.893 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:00.893 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:00.893 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:00.893 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:00.893 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:40:01.151 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:40:01.151 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:40:01.151 
16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:40:01.151 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:01.151 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:01.151 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:40:01.151 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:01.151 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:01.151 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:01.151 16:56:57 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:40:01.410 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:40:01.410 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:40:01.410 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:40:01.410 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:01.410 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:01.410 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:40:01.410 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:01.410 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:01.410 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:01.410 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:01.410 16:56:58 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:01.669 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11') 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:01.670 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:40:01.929 /dev/nbd0 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:01.929 16:56:58 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:01.929 1+0 records in 00:40:01.929 1+0 records out 00:40:01.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297467 s, 13.8 MB/s 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:01.929 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:40:02.189 /dev/nbd1 00:40:02.189 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:40:02.189 16:56:58 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:02.189 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:40:02.189 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:02.189 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:02.189 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:02.189 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:40:02.189 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:02.189 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:02.189 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:02.189 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:02.189 1+0 records in 00:40:02.189 1+0 records out 00:40:02.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027469 s, 14.9 MB/s 00:40:02.189 16:56:58 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:02.189 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:40:02.189 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:02.189 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:02.189 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:02.189 16:56:59 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:02.189 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:02.189 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:40:02.448 /dev/nbd10 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:02.448 1+0 records in 00:40:02.448 1+0 records out 00:40:02.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243677 s, 16.8 MB/s 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:02.448 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:40:02.707 /dev/nbd11 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:02.707 1+0 records in 00:40:02.707 1+0 records out 00:40:02.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390971 s, 10.5 MB/s 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:02.707 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:40:02.966 { 00:40:02.966 "nbd_device": "/dev/nbd0", 00:40:02.966 "bdev_name": "crypto_ram" 00:40:02.966 }, 00:40:02.966 { 00:40:02.966 "nbd_device": "/dev/nbd1", 00:40:02.966 "bdev_name": "crypto_ram2" 00:40:02.966 }, 00:40:02.966 { 00:40:02.966 "nbd_device": "/dev/nbd10", 00:40:02.966 "bdev_name": "crypto_ram3" 00:40:02.966 }, 00:40:02.966 { 00:40:02.966 "nbd_device": "/dev/nbd11", 
00:40:02.966 "bdev_name": "crypto_ram4" 00:40:02.966 } 00:40:02.966 ]' 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:40:02.966 { 00:40:02.966 "nbd_device": "/dev/nbd0", 00:40:02.966 "bdev_name": "crypto_ram" 00:40:02.966 }, 00:40:02.966 { 00:40:02.966 "nbd_device": "/dev/nbd1", 00:40:02.966 "bdev_name": "crypto_ram2" 00:40:02.966 }, 00:40:02.966 { 00:40:02.966 "nbd_device": "/dev/nbd10", 00:40:02.966 "bdev_name": "crypto_ram3" 00:40:02.966 }, 00:40:02.966 { 00:40:02.966 "nbd_device": "/dev/nbd11", 00:40:02.966 "bdev_name": "crypto_ram4" 00:40:02.966 } 00:40:02.966 ]' 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:40:02.966 /dev/nbd1 00:40:02.966 /dev/nbd10 00:40:02.966 /dev/nbd11' 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:40:02.966 /dev/nbd1 00:40:02.966 /dev/nbd10 00:40:02.966 /dev/nbd11' 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:40:02.966 16:56:59 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:40:02.966 256+0 records in 00:40:02.966 256+0 records out 00:40:02.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105526 s, 99.4 MB/s 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:02.966 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:40:03.225 256+0 records in 00:40:03.225 256+0 records out 00:40:03.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.04369 s, 24.0 MB/s 00:40:03.225 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:03.225 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:40:03.225 256+0 records in 00:40:03.225 256+0 records out 00:40:03.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0560692 s, 18.7 MB/s 00:40:03.225 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:03.225 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:40:03.225 256+0 records in 00:40:03.225 256+0 records out 00:40:03.225 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0423216 s, 24.8 MB/s 00:40:03.225 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:03.225 16:56:59 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:40:03.225 256+0 records in 00:40:03.225 256+0 records out 00:40:03.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0389225 s, 26.9 MB/s 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:03.225 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:03.484 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:03.484 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:03.484 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:03.484 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:40:03.484 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:03.484 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:03.484 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:03.484 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:03.484 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:03.484 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:40:03.743 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:03.743 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:03.743 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:03.743 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:03.743 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:03.743 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:03.743 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:03.743 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:03.743 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:03.743 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:40:04.002 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:40:04.002 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd10 00:40:04.002 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:40:04.002 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:04.002 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:04.002 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:40:04.002 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:04.002 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:04.002 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:04.002 16:57:00 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:40:04.260 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:40:04.261 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:40:04.261 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:40:04.261 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:04.261 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:04.261 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:40:04.261 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:04.261 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:04.261 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:04.261 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:04.261 16:57:01 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:40:04.520 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:40:04.520 16:57:01 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:40:04.779 malloc_lvol_verify 00:40:04.779 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:40:05.038 f66569e2-e02a-43e5-b7a5-65cf75ebedf6 00:40:05.038 16:57:01 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:40:05.296 82556349-f4c6-4cea-a58d-002fa5a06947 00:40:05.296 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:40:05.554 /dev/nbd0 00:40:05.554 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:40:05.554 mke2fs 1.46.5 (30-Dec-2021) 00:40:05.554 Discarding device blocks: 0/4096 done 00:40:05.554 Creating filesystem with 4096 1k blocks and 1024 inodes 00:40:05.554 00:40:05.554 Allocating group tables: 0/1 done 00:40:05.554 Writing inode tables: 0/1 done 00:40:05.554 Creating journal (1024 blocks): done 00:40:05.554 Writing superblocks and filesystem accounting information: 0/1 done 00:40:05.554 00:40:05.554 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:40:05.554 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:40:05.554 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:05.554 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:05.554 16:57:02 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:05.554 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:05.554 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:05.554 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 1875036 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 1875036 ']' 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 1875036 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1875036 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1875036' 00:40:05.812 killing process with pid 1875036 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@969 -- # kill 1875036 00:40:05.812 16:57:02 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@974 -- # wait 1875036 00:40:09.096 16:57:05 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:40:09.096 00:40:09.096 real 0m13.975s 00:40:09.096 user 0m17.124s 00:40:09.096 sys 0m4.019s 00:40:09.096 16:57:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:09.096 16:57:05 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:40:09.096 ************************************ 00:40:09.096 END TEST bdev_nbd 00:40:09.096 ************************************ 00:40:09.096 16:57:05 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:40:09.097 16:57:05 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = nvme ']' 00:40:09.097 16:57:05 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = gpt ']' 00:40:09.097 16:57:05 blockdev_crypto_aesni -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:40:09.097 16:57:05 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:09.097 16:57:05 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:09.097 16:57:05 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:40:09.097 
************************************ 00:40:09.097 START TEST bdev_fio 00:40:09.097 ************************************ 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:40:09.097 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 
00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram4]' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram4 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:40:09.097 ************************************ 00:40:09.097 START TEST bdev_fio_rw_verify 00:40:09.097 ************************************ 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:09.097 16:57:05 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:09.097 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:09.097 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:09.097 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:09.097 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:09.097 fio-3.35 00:40:09.097 Starting 4 threads 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:01.0 cannot be used 00:40:09.356 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:01.1 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:01.2 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:01.3 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:01.4 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:01.5 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:01.6 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:01.7 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:02.0 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:02.1 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:02.2 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:02.3 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:02.4 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:02.5 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:02.6 cannot be used 00:40:09.356 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3d:02.7 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:01.0 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:01.1 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:01.2 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:01.3 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:01.4 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:01.5 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:01.6 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:01.7 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:02.0 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:02.1 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:02.2 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:02.3 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:02.4 cannot be used 00:40:09.356 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:02.5 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:02.6 cannot be used 00:40:09.356 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:09.356 EAL: Requested device 0000:3f:02.7 cannot be used 00:40:24.235 00:40:24.235 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1879008: Wed Jul 24 16:57:19 2024 00:40:24.235 read: IOPS=24.1k, BW=94.1MiB/s (98.6MB/s)(941MiB/10001msec) 00:40:24.235 slat (usec): min=18, max=543, avg=53.29, stdev=32.87 00:40:24.235 clat (usec): min=13, max=1328, avg=300.09, stdev=201.11 00:40:24.235 lat (usec): min=43, max=1464, avg=353.38, stdev=222.54 00:40:24.235 clat percentiles (usec): 00:40:24.235 | 50.000th=[ 249], 99.000th=[ 1004], 99.900th=[ 1156], 99.990th=[ 1237], 00:40:24.235 | 99.999th=[ 1303] 00:40:24.235 write: IOPS=26.4k, BW=103MiB/s (108MB/s)(1003MiB/9742msec); 0 zone resets 00:40:24.235 slat (usec): min=27, max=505, avg=65.93, stdev=32.46 00:40:24.235 clat (usec): min=35, max=3008, avg=367.20, stdev=240.68 00:40:24.235 lat (usec): min=84, max=3267, avg=433.13, stdev=261.08 00:40:24.235 clat percentiles (usec): 00:40:24.235 | 50.000th=[ 318], 99.000th=[ 1254], 99.900th=[ 1418], 99.990th=[ 1778], 00:40:24.235 | 99.999th=[ 2704] 00:40:24.235 bw ( KiB/s): min=83776, max=129607, per=97.94%, avg=103288.79, stdev=2797.95, samples=76 00:40:24.235 iops : min=20944, max=32401, avg=25822.16, stdev=699.47, samples=76 00:40:24.235 lat (usec) : 20=0.01%, 50=0.01%, 100=7.04%, 250=35.28%, 500=40.54% 00:40:24.235 lat (usec) : 750=10.51%, 1000=4.74% 00:40:24.235 lat (msec) : 2=1.88%, 4=0.01% 00:40:24.235 cpu : usr=99.24%, sys=0.23%, ctx=79, majf=0, minf=25251 00:40:24.235 IO depths : 1=10.3%, 2=25.5%, 4=51.1%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:24.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:40:24.235 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:24.235 issued rwts: total=240844,256855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:24.235 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:24.235 00:40:24.235 Run status group 0 (all jobs): 00:40:24.235 READ: bw=94.1MiB/s (98.6MB/s), 94.1MiB/s-94.1MiB/s (98.6MB/s-98.6MB/s), io=941MiB (986MB), run=10001-10001msec 00:40:24.235 WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=1003MiB (1052MB), run=9742-9742msec 00:40:26.173 ----------------------------------------------------- 00:40:26.174 Suppressions used: 00:40:26.174 count bytes template 00:40:26.174 4 47 /usr/src/fio/parse.c 00:40:26.174 813 78048 /usr/src/fio/iolog.c 00:40:26.174 1 8 libtcmalloc_minimal.so 00:40:26.174 1 904 libcrypto.so 00:40:26.174 ----------------------------------------------------- 00:40:26.174 00:40:26.174 00:40:26.174 real 0m17.213s 00:40:26.174 user 0m58.062s 00:40:26.174 sys 0m0.911s 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:40:26.174 ************************************ 00:40:26.174 END TEST bdev_fio_rw_verify 00:40:26.174 ************************************ 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:26.174 
16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "8b1dc26b-80d5-5a13-a85c-dcbb3dc08e7a"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8b1dc26b-80d5-5a13-a85c-dcbb3dc08e7a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "f2cfbfd0-af2f-537e-9d71-42c3ba480c16"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f2cfbfd0-af2f-537e-9d71-42c3ba480c16",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' 
"b16012fc-ec6a-53e4-a2ae-2bb1f74b3fcb"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "b16012fc-ec6a-53e4-a2ae-2bb1f74b3fcb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "b527d805-5bba-566b-8c01-ef0a6d4b3424"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "b527d805-5bba-566b-8c01-ef0a6d4b3424",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:40:26.174 crypto_ram2 00:40:26.174 crypto_ram3 00:40:26.174 crypto_ram4 ]] 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "8b1dc26b-80d5-5a13-a85c-dcbb3dc08e7a"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8b1dc26b-80d5-5a13-a85c-dcbb3dc08e7a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "f2cfbfd0-af2f-537e-9d71-42c3ba480c16"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "f2cfbfd0-af2f-537e-9d71-42c3ba480c16",' ' "assigned_rate_limits": {' 
' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "b16012fc-ec6a-53e4-a2ae-2bb1f74b3fcb"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "b16012fc-ec6a-53e4-a2ae-2bb1f74b3fcb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' 
' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "b527d805-5bba-566b-8c01-ef0a6d4b3424"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "b527d805-5bba-566b-8c01-ef0a6d4b3424",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:40:26.174 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio -- 
bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram4]' 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram4 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:40:26.175 ************************************ 00:40:26.175 START TEST bdev_fio_trim 00:40:26.175 ************************************ 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:40:26.175 16:57:22 
blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:26.175 16:57:22 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:26.434 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:26.434 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:26.434 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:26.434 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:26.434 fio-3.35 00:40:26.434 Starting 4 threads
00:40:41.571 00:40:41.571 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1881795: Wed Jul 24 16:57:37 2024 00:40:41.571 write: IOPS=39.0k, BW=152MiB/s (160MB/s)(1522MiB/10001msec); 0 zone resets 00:40:41.571 slat (usec): min=18, max=479, avg=57.05, stdev=28.31 00:40:41.571 clat (usec): min=48, max=1718, avg=263.79, stdev=155.32 00:40:41.571 lat (usec): min=82, max=1946, avg=320.84, stdev=171.68 00:40:41.571 clat percentiles (usec): 00:40:41.571 | 50.000th=[ 227], 99.000th=[ 775], 99.900th=[ 922], 99.990th=[ 1029], 00:40:41.571 | 99.999th=[ 1500] 00:40:41.571 bw ( KiB/s): min=142584, max=212672, per=100.00%, avg=156458.95, stdev=4608.74, samples=76 00:40:41.571 iops : min=35646, max=53168, avg=39114.74, stdev=1152.19, samples=76 00:40:41.571 trim: IOPS=39.0k, BW=152MiB/s (160MB/s)(1522MiB/10001msec); 0 zone resets 00:40:41.571 slat (usec): min=5, max=512, avg=16.43, stdev= 6.46 00:40:41.571 clat (usec): min=52, max=1204, avg=247.86, stdev=103.89 00:40:41.571 lat (usec): min=57, max=1291, avg=264.29, stdev=105.58 00:40:41.571 clat percentiles (usec): 00:40:41.571 | 50.000th=[ 235], 99.000th=[ 537], 99.900th=[ 635], 99.990th=[ 750], 00:40:41.571 | 99.999th=[ 1012] 00:40:41.571 bw ( KiB/s): min=142592, max=212688, per=100.00%, avg=156460.21, stdev=4609.57, samples=76 00:40:41.571 iops : min=35648, max=53172, avg=39115.05, stdev=1152.39, samples=76 00:40:41.571 lat (usec) : 50=0.01%, 100=5.91%, 250=50.83%, 500=37.83%, 750=4.75% 00:40:41.571 lat (usec) : 1000=0.67% 00:40:41.571 lat (msec) : 2=0.01% 00:40:41.571 cpu : usr=99.51%, sys=0.06%, ctx=98, majf=0, minf=7684 00:40:41.571 IO depths : 1=7.6%, 2=26.4%, 4=52.8%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:41.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:41.571 complete : 0=0.0%, 4=88.3%, 8=11.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:41.571 issued rwts: total=0,389579,389580,0 short=0,0,0,0 dropped=0,0,0,0 00:40:41.571 latency : target=0, window=0, percentile=100.00%, depth=8 00:40:41.571 00:40:41.571 Run status group 0 (all jobs): 00:40:41.571 WRITE: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=1522MiB (1596MB), run=10001-10001msec 00:40:41.571 TRIM: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=1522MiB (1596MB), run=10001-10001msec 00:40:43.481 ----------------------------------------------------- 00:40:43.481 Suppressions used: 00:40:43.481 count bytes template 00:40:43.481 4 47 /usr/src/fio/parse.c 00:40:43.481 1 8 libtcmalloc_minimal.so 00:40:43.481 1 904 libcrypto.so 00:40:43.481 ----------------------------------------------------- 00:40:43.481 00:40:43.481 00:40:43.481 real 0m17.105s 00:40:43.481 user 0m57.682s 00:40:43.481 sys 0m0.941s 00:40:43.481 16:57:39 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:43.481 16:57:39 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:40:43.481 ************************************ 00:40:43.481 END TEST bdev_fio_trim 00:40:43.481 ************************************ 00:40:43.481 16:57:39 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:40:43.481 16:57:39 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:43.481 16:57:39 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:40:43.481 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:40:43.481 16:57:39 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:40:43.481 00:40:43.481 real 0m34.666s 00:40:43.481 user 1m55.924s 00:40:43.481 sys 0m2.041s 00:40:43.481 16:57:39 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:43.481 16:57:39 blockdev_crypto_aesni.bdev_fio -- 
common/autotest_common.sh@10 -- # set +x 00:40:43.481 ************************************ 00:40:43.481 END TEST bdev_fio 00:40:43.481 ************************************ 00:40:43.481 16:57:39 blockdev_crypto_aesni -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:40:43.481 16:57:39 blockdev_crypto_aesni -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:43.481 16:57:39 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:40:43.481 16:57:40 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:43.481 16:57:40 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:40:43.481 ************************************ 00:40:43.481 START TEST bdev_verify 00:40:43.481 ************************************ 00:40:43.481 16:57:40 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:43.481 [2024-07-24 16:57:40.149087] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:40:43.481 [2024-07-24 16:57:40.149212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1883916 ]
00:40:43.740 [2024-07-24 16:57:40.377237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:44.015 [2024-07-24 16:57:40.654808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:44.015 [2024-07-24 16:57:40.654813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:44.015 [2024-07-24 16:57:40.676685] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:40:44.015 [2024-07-24 16:57:40.684715] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:40:44.015 [2024-07-24 16:57:40.692728] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:40:44.275 [2024-07-24 16:57:41.076631] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:40:47.564 [2024-07-24 16:57:43.965057] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:40:47.564 [2024-07-24 16:57:43.965151] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
[2024-07-24 16:57:43.965171] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:47.564 [2024-07-24 16:57:43.973078] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:40:47.564 [2024-07-24 16:57:43.973117] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:40:47.564 [2024-07-24 16:57:43.973134] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:47.564 [2024-07-24 16:57:43.981127] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:40:47.564 [2024-07-24 16:57:43.981169] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:40:47.564 [2024-07-24 16:57:43.981190] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:47.564 [2024-07-24 16:57:43.989114] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:40:47.564 [2024-07-24 16:57:43.989177] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:40:47.564 [2024-07-24 16:57:43.989194] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:47.564 Running I/O for 5 seconds... 
00:40:52.839
00:40:52.839                                                                           Latency(us)
00:40:52.839 Device Information                                                 : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average       min        max
00:40:52.839 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:40:52.839 Verification LBA range: start 0x0 length 0x1000
00:40:52.839 crypto_ram                                                         :       5.06   480.53     1.88     0.00   0.00  265684.32  14365.49  175321.91
00:40:52.839 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:40:52.839 Verification LBA range: start 0x1000 length 0x1000
00:40:52.839 crypto_ram                                                         :       5.06   480.25     1.88     0.00   0.00  265386.03  14575.21  176160.77
00:40:52.839 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:40:52.839 Verification LBA range: start 0x0 length 0x1000
00:40:52.839 crypto_ram2                                                        :       5.06   480.44     1.88     0.00   0.00  264960.72  10957.62  164416.72
00:40:52.839 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:40:52.839 Verification LBA range: start 0x1000 length 0x1000
00:40:52.839 crypto_ram2                                                        :       5.07   482.97     1.89     0.00   0.00  263337.72    448.92  164416.72
00:40:52.839 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:40:52.839 Verification LBA range: start 0x0 length 0x1000
00:40:52.839 crypto_ram3                                                        :       5.05  3772.32    14.74     0.00   0.00   33651.23   3643.80   27472.69
00:40:52.839 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:40:52.839 Verification LBA range: start 0x1000 length 0x1000
00:40:52.839 crypto_ram3                                                        :       5.06  3794.16    14.82     0.00   0.00   33451.88   3512.73   28101.84
00:40:52.839 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:40:52.839 Verification LBA range: start 0x0 length 0x1000
00:40:52.839 crypto_ram4                                                        :       5.05  3772.93    14.74     0.00   0.00   33543.99   3722.44   26004.68
00:40:52.839 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:40:52.839 Verification LBA range: start 0x1000 length 0x1000
00:40:52.839 crypto_ram4                                                        :       5.06  3794.72    14.82     0.00   0.00   33341.62   3643.80   25060.97
00:40:52.839 ===================================================================================================================
00:40:52.839 Total                                                              :            17058.33    66.63     0.00   0.00   59622.91    448.92  176160.77
00:40:55.432
00:40:55.432 real	0m12.046s
00:40:55.432 user	0m22.211s
00:40:55.432 sys	0m0.549s
00:40:55.432 16:57:52 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:40:55.432 16:57:52 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:40:55.432 ************************************
00:40:55.432 END TEST bdev_verify
00:40:55.432 ************************************
00:40:55.432 16:57:52 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:40:55.432 16:57:52 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:40:55.432 16:57:52 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable
00:40:55.432 16:57:52 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:40:55.432 ************************************
00:40:55.432 START TEST bdev_verify_big_io
00:40:55.432 ************************************
00:40:55.432 16:57:52 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:40:55.432 [2024-07-24 16:57:52.263302] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:40:55.432 [2024-07-24 16:57:52.263419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1885871 ]
00:40:55.693 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:40:55.693 EAL: Requested device 0000:3d:01.0 cannot be used
00:40:55.694 (same message pair repeated for each remaining QAT virtual function: 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7)
00:40:55.694 [2024-07-24 16:57:52.489064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:40:55.951 [2024-07-24 16:57:52.771402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:40:55.951 [2024-07-24 16:57:52.771409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:40:55.951 [2024-07-24 16:57:52.793244] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:40:55.951 [2024-07-24 16:57:52.801270] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:40:55.951 [2024-07-24 16:57:52.809289] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:40:56.518 [2024-07-24 16:57:53.192716] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:40:59.803 [2024-07-24 16:57:56.067838] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:40:59.803 [2024-07-24 16:57:56.067919] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:40:59.803 [2024-07-24 16:57:56.067938] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:40:59.803 [2024-07-24 16:57:56.075858] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:40:59.803 [2024-07-24 16:57:56.075896] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:40:59.803 [2024-07-24 16:57:56.075911] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:40:59.803 [2024-07-24 16:57:56.083906] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:40:59.803 [2024-07-24 16:57:56.083941] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:40:59.803 [2024-07-24 16:57:56.083956] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:40:59.803 [2024-07-24 16:57:56.091897] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:40:59.803 [2024-07-24 16:57:56.091951] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:40:59.803 [2024-07-24 16:57:56.091966] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:40:59.803 Running I/O for 5 seconds...
00:41:02.336 [2024-07-24 16:57:59.139043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:02.336 [2024-07-24 16:57:59.140356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:02.336 [2024-07-24 16:57:59.141916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:02.336 [2024-07-24 16:57:59.143441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:02.598 accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! (identical error repeated continuously from 16:57:59.146 through 16:57:59.325)
00:41:02.599 [2024-07-24 16:57:59.325512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.326989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.327412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.327469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.328698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.329167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.329573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.329623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.330012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.331344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.332779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.332833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.334156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:02.599 [2024-07-24 16:57:59.334687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.335091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.335152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.335541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.338232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.338298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.339607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.339662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.340517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.340580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.341700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.341751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.344212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:02.599 [2024-07-24 16:57:59.344278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.344672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.344725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.345605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.345670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.347067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.347132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.349774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.349839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.350238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.350286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.351912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.351975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:02.599 [2024-07-24 16:57:59.353145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.353200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.354938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.355005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.355404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.355453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.357316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.357395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.358872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.358926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.360586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.360653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.361047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:02.599 [2024-07-24 16:57:59.361095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.362636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.362702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.363340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.363395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.365129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.365201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.365648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.365698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.367633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.367698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.368590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.368642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:02.599 [2024-07-24 16:57:59.370333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.370409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.371806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.371857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.372886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.372950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.374514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.374571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.376493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.376555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.378131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.378197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.378996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:02.599 [2024-07-24 16:57:59.379060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.380480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.380533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.382281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.382346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.382739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.382794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.383710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.383773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.384179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.384235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.599 [2024-07-24 16:57:59.386159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.386230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:02.600 [2024-07-24 16:57:59.386628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.386681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.386705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.387089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.387619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.387682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.388076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.388161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.388187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.388603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.390035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.390451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.390510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:02.600 [2024-07-24 16:57:59.390924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.391288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.391477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.391877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.391928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.392327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.392682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.393827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.393887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.393932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.393977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.394321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.394500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:02.600 [2024-07-24 16:57:59.394566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.394613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.394659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.395018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.396153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.396214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.396260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.396305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.396645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.396815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.396870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.396916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.396973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:02.600 [2024-07-24 16:57:59.397345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.399116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.399195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.399255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.399315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.399666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.399847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.399915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.399962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.400007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.400371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.401387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.401451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:02.600 [2024-07-24 16:57:59.401496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.401541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.401882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.402056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.402125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.402179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.402226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.402577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.403757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.403839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.403899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.403956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.404325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:02.600 [2024-07-24 16:57:59.404499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.404557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.404603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.404648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.405061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.406162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.406220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.406270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.406317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.406710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.406885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.406939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:02.600 [2024-07-24 16:57:59.406984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:02.600 [2024-07-24 16:57:59.407029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468 repeated continuously through 00:41:03.126 (2024-07-24 16:57:59.754152); repeats omitted ...]
00:41:03.126 [2024-07-24 16:57:59.754543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.755965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.756379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.756432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.756825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.757345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.757749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.757801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.758200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.759731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.760136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.760269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.760664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.126 [2024-07-24 16:57:59.761168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.761573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.761623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.762013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.763477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.763883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.763934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.764336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.764865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.765286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.765356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.765747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.767242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.126 [2024-07-24 16:57:59.767646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.767697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.768086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.768114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.768444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.768617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.769020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.769076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.769478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.769507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.769905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.771353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.771415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.126 [2024-07-24 16:57:59.771805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.771852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.772129] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:41:03.126 [2024-07-24 16:57:59.772669] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:41:03.126 [2024-07-24 16:57:59.772762] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:41:03.126 [2024-07-24 16:57:59.773181] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:41:03.126 [2024-07-24 16:57:59.773242] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:41:03.126 [2024-07-24 16:57:59.774608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.774667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.774712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.774757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.775191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.775372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.126 [2024-07-24 16:57:59.775426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.775486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.775531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.776899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.776969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.777015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.777060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.777430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.777607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.777661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.777706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.777750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.779071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.126 [2024-07-24 16:57:59.779144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.779192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.779268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.779562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.779735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.779790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.779836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.779882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.781401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.781464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.781523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.781573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.781866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.126 [2024-07-24 16:57:59.782038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.782093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.782145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.782192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.783612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.783675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.783721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.126 [2024-07-24 16:57:59.783766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.784119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.784295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.784355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.784401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.784445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.127 [2024-07-24 16:57:59.785807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.785879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.785925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.785970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.786295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.786467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.786526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.786571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.786622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.788213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.788270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.788315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.788360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.127 [2024-07-24 16:57:59.788683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.788855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.788909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.788963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.789011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.790381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.790441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.790487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.790533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.790968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.791160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.791215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.791260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.127 [2024-07-24 16:57:59.791306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.792616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.793373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.793431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.794832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.795171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.795347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.796981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.797033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.797446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.798942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.800595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.800651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.127 [2024-07-24 16:57:59.801098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.801402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.801575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.802670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.802724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.803116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.804568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.805192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.805247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.806734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.807035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.807217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.807619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.127 [2024-07-24 16:57:59.807669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.808061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.809357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.810179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.810236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.811638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.811972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.812153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.813666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.813718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.814260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.815690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.817296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.127 [2024-07-24 16:57:59.817352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.817872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.818184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.818355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.819369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.819420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.821114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.822739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.824290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.824356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.825917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.826303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.127 [2024-07-24 16:57:59.826478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.127 [2024-07-24 16:57:59.827907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.391 [previous error "Failed to get src_mbufs!" from accel_dpdk_cryptodev.c:468 repeated many times, through 2024-07-24 16:58:00.114291] 
00:41:03.391 [2024-07-24 16:58:00.114597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.116178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.116248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.116646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.116699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.121550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.121621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.123047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.123098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.123482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.124780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.124844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.125773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.391 [2024-07-24 16:58:00.125836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.129135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.129207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.129604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.129659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.129964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.131746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.131815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.132824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.132880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.137106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.137177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.138167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.391 [2024-07-24 16:58:00.138222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.138577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.139545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.139609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.141094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.141151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.145848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.145931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.147417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.147480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.147791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.149047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.149108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.391 [2024-07-24 16:58:00.149563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.149613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.153650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.153715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.154876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.154932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.155365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.156740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.156801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.157647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.157699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.162389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.162455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.391 [2024-07-24 16:58:00.163696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.163748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.164055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.391 [2024-07-24 16:58:00.164579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.164650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.166303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.166354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.171793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.171864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.172266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.172321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.172625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.173135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.392 [2024-07-24 16:58:00.173207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.174465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.174520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.177525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.177590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.178726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.178778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.179188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.180724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.180788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.181194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.181244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.183834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.392 [2024-07-24 16:58:00.183899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.185333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.185618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.186001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.186528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.186632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.187135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.187197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.189922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.190005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.191578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.191629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.191992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.392 [2024-07-24 16:58:00.193358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.193420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.193814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.193875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.196390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.196463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.196858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.196913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.197320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.199005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.199077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.199481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.199535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.392 [2024-07-24 16:58:00.202067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.202132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.203821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.203878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.204293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.204810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.204874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.205279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.205335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.207826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.207906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.209100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.209159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.392 [2024-07-24 16:58:00.209584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.211179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.211249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.211645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.211693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.215054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.215136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.215538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.215593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.215927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.216988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.217052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.217697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.392 [2024-07-24 16:58:00.217749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.221273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.221337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.222218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.222270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.222644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.223166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.223236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.223632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.223695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.227814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.227887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.228307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.392 [2024-07-24 16:58:00.228363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.228670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.229360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.229425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.392 [2024-07-24 16:58:00.230394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.230444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.235623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.235688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.236082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.236135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.236529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.237047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.237120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.393 [2024-07-24 16:58:00.238664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.238714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.243542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.243607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.244282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.244335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.244642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.245438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.245501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.246397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.393 [2024-07-24 16:58:00.246448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.653 [2024-07-24 16:58:00.251005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.653 [2024-07-24 16:58:00.252309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.653 [2024-07-24 16:58:00.252869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:03.656 [identical "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 repeated for each subsequent allocation attempt; last occurrence at 2024-07-24 16:58:00.483229]
00:41:03.656 [2024-07-24 16:58:00.483399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.484461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.484516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.485093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.489235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.490788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.490845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.492313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.492630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.492798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.493965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.494018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.495640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.656 [2024-07-24 16:58:00.499930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.500001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.500048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.501608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.501917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.502085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.502152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.502200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.656 [2024-07-24 16:58:00.503901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.657 [2024-07-24 16:58:00.507163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.657 [2024-07-24 16:58:00.508251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.657 [2024-07-24 16:58:00.508998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.657 [2024-07-24 16:58:00.509919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.657 [2024-07-24 16:58:00.510262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.657 [2024-07-24 16:58:00.510433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.657 [2024-07-24 16:58:00.511740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.917 [2024-07-24 16:58:00.513278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.917 [2024-07-24 16:58:00.514029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.917 [2024-07-24 16:58:00.519456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.917 [2024-07-24 16:58:00.520316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.917 [2024-07-24 16:58:00.521317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.917 [2024-07-24 16:58:00.521966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.917 [2024-07-24 16:58:00.522307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.917 [2024-07-24 16:58:00.523738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.917 [2024-07-24 16:58:00.525266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.917 [2024-07-24 16:58:00.526462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.917 [2024-07-24 16:58:00.527734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.917 [2024-07-24 16:58:00.531495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.917 [2024-07-24 16:58:00.532754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.917 [2024-07-24 16:58:00.533189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.917 [2024-07-24 16:58:00.534609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.534922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.536735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.537524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.538799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.540008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.543832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.545031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.546453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.918 [2024-07-24 16:58:00.547106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.547447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.548932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.550244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.550777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.552054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.556675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.557811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.559277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.559679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.559989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.560514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.561514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.918 [2024-07-24 16:58:00.562740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.563707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.569329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.569742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.570892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.572058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.572410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.573721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.574916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.575728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.577429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.582760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.584446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.918 [2024-07-24 16:58:00.585578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.586253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.586567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.587085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.588618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.589011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.590718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.595379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.595829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.597230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.597626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.597938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.599527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.918 [2024-07-24 16:58:00.599933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.601381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.602808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.608885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.610326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.610736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.610790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.611115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.612910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.613777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.614724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.614777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.618880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.918 [2024-07-24 16:58:00.618945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.619851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.619906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.620253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.621989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.622057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.622460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.622512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.626983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.627054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.628685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.628747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.629085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.918 [2024-07-24 16:58:00.629945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.630008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.631325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.631377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.636048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.636114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.636759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.636815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.637124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.637699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.637763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.638862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.638912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.918 [2024-07-24 16:58:00.643525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.643592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.644753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.644805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.918 [2024-07-24 16:58:00.645236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.646800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.646866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.647273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.647323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.652015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.652082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.653284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.653334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.919 [2024-07-24 16:58:00.653643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.654663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.654727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.655749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.655801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.660667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.660735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.661134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.661205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.661511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.662023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.662098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.663643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.919 [2024-07-24 16:58:00.663706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.669951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.670016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.670419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.670471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.670783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.672561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.672631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.674029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.674081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.677777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.677849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:03.919 [2024-07-24 16:58:00.679510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:03.919 [2024-07-24 16:58:00.679561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:04.183 [2024-07-24 16:58:00.849495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.849551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.851065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.852518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.853846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.853900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.855071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.855402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.855570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.857217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.857270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.858832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.860307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.183 [2024-07-24 16:58:00.861902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.861956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.862353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.862662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.862831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.864258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.864312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.865648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.867001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.868454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.868509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.869987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.870340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.183 [2024-07-24 16:58:00.870507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.183 [2024-07-24 16:58:00.870909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.870959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.872131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.875909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.877488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.877546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.879037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.879357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.879524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.879928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.879979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.880393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.184 [2024-07-24 16:58:00.884337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.885597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.885651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.886923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.887254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.887420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.888491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.888544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.888935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.890352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.891944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.892001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.893493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.184 [2024-07-24 16:58:00.893805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.894007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.895564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.895629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.897264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.898631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.899763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.899818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.901104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.901464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.901632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.902954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.903008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.184 [2024-07-24 16:58:00.903731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.905091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.906173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.906227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.906617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.907035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.907215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.908574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.908628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.909941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.911253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.912692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.912746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.184 [2024-07-24 16:58:00.914251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.914588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.914757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.915174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.915226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.915616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.917005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.918326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.918381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.919056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.919381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.919551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.920852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.184 [2024-07-24 16:58:00.920907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.922216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.923910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.925493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.925555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.927064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.927383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.927556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.929006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.929059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.930535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.931979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.932399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.184 [2024-07-24 16:58:00.932452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.932841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.933161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.933328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.934604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.934659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.935930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.937279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.938600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.938655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.939943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.940267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.940448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.184 [2024-07-24 16:58:00.940850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.940900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.184 [2024-07-24 16:58:00.941301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.942692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.943952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.944009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.945582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.945923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.946091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.946503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.946557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.947726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.949085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.185 [2024-07-24 16:58:00.950210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.950267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.950769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.951183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.951352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.951827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.951880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.953074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.954421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.955770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.955828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.957258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.957646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.185 [2024-07-24 16:58:00.957809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.959534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.959602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.959991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.961417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.961482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.961527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.962502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.962815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.962983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.963043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.963089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.963495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.185 [2024-07-24 16:58:00.965027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.966339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.967001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.968396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.968709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.968877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.970618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.971013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.971411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.976186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.977135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.978035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.185 [2024-07-24 16:58:00.978943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.185 [2024-07-24 16:58:00.979305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.185-00:41:04.450 [previous *ERROR* line repeated continuously from 16:58:00.979305 through 16:58:01.169875] 
00:41:04.450 [2024-07-24 16:58:01.171513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.450 [2024-07-24 16:58:01.172989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.173047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.175932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.175996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.176042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.176087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.176456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.176972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.177032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.177077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.177126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.178585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.178645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.450 [2024-07-24 16:58:01.178706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.178752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.179150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.179326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.179380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.179425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.179470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.180814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.180883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.180928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.180973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.181287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.181464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.450 [2024-07-24 16:58:01.181520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.181566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.181611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.183024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.183082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.183128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.183180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.183524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.183692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.183747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.183809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.183855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.185465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.450 [2024-07-24 16:58:01.185532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.185578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.185632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.185938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.186108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.186173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.186227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.186280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.187654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.187713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.187763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.187808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.188116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.450 [2024-07-24 16:58:01.188293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.188348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.188394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.188438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.189786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.189847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.189892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.189937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.190286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.190452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.190507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.190552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.190604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.450 [2024-07-24 16:58:01.191972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.192031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.192076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.450 [2024-07-24 16:58:01.192122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.192434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.192605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.192663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.192709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.192754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.194109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.194175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.194221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.195481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.451 [2024-07-24 16:58:01.195848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.196017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.196071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.196117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.197647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.199167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.200465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.200519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.201798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.202110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.202288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.202971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.203025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.451 [2024-07-24 16:58:01.204457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.205763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.206181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.206234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.206632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.206944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.207116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.208724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.208784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.210438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.211840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.213160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.213214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.451 [2024-07-24 16:58:01.214747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.215112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.215284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.215687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.215736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.216486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.217827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.218698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.218752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.220332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.220642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.220807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.222375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.451 [2024-07-24 16:58:01.222429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.223677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.225122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.226554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.226608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.228053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.228368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.228535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.229529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.229583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.230839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.232202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.232608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.451 [2024-07-24 16:58:01.232658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.233272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.233594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.233763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.235078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.235133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.236654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.237994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.239679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.239733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.241081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.241487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.241651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.451 [2024-07-24 16:58:01.242052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.242102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.243662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.245029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.245938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.245992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.247267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.247624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.247788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.249355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.249408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.249814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.251251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.451 [2024-07-24 16:58:01.252579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.252634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.254174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.254525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.451 [2024-07-24 16:58:01.254691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.452 [2024-07-24 16:58:01.256345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.452 [2024-07-24 16:58:01.256399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.452 [2024-07-24 16:58:01.257964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.452 [2024-07-24 16:58:01.259313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.452 [2024-07-24 16:58:01.259719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.452 [2024-07-24 16:58:01.259769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.452 [2024-07-24 16:58:01.261175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.452 [2024-07-24 16:58:01.261509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.452 [2024-07-24 16:58:01.261674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [identical *ERROR* message repeated for subsequent allocation attempts, timestamps 2024-07-24 16:58:01.263077 through 2024-07-24 16:58:01.452916]
00:41:04.716 [2024-07-24 16:58:01.452973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.455783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.455856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.456266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.456318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.456667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.457653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.457715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.458630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.458684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.460871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.460948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.461353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.716 [2024-07-24 16:58:01.461426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.461839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.463618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.463680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.465175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.465236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.466994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.467060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.467464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.467519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.467831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.468856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.468918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.716 [2024-07-24 16:58:01.469917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.469976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.471720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.471792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.716 [2024-07-24 16:58:01.472192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.472246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.472557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.474254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.474333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.475083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.475135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.476867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.476933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.717 [2024-07-24 16:58:01.478099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.478157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.478530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.479551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.479615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.481330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.481379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.483269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.483336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.484702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.484762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.485073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.485947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.717 [2024-07-24 16:58:01.486010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.486929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.486983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.489329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.489394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.490322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.490376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.490718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.492496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.492565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.493955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.494009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.497072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.717 [2024-07-24 16:58:01.497144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.498569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.498644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.498956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.500155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.500217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.501473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.501525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.503556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.503618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.505231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.505282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.505588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.717 [2024-07-24 16:58:01.506164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.506227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.507335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.507392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.509083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.509170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.510684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.510736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.511044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.511781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.511848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.513381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.513432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.717 [2024-07-24 16:58:01.516740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.516813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.517218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.517274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.517579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.518092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.518163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.518562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.518616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.520984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.521050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.521647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.521701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.717 [2024-07-24 16:58:01.522086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.522605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.522669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.524055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.524104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.525758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.525828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.527472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.527542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.527849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.528405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.528468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.529794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.717 [2024-07-24 16:58:01.529846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.533236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.533307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.534960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.535009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.717 [2024-07-24 16:58:01.535326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.536989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.537050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.538537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.538587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.540308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.540713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.541493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.718 [2024-07-24 16:58:01.541546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.541906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.543340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.544668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.545756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.545810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.548576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.548646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.548694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.548740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.549187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.549706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.549769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.718 [2024-07-24 16:58:01.549815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.549865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.551151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.551215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.551273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.551318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.551659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.551833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.551887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.551932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.551977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.553331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.553402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.718 [2024-07-24 16:58:01.553447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.553495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.553868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.554044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.554100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.554154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.554227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.555682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.555744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.555792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.555837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.556170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.718 [2024-07-24 16:58:01.556340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.718 [2024-07-24 16:58:01.556395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... same message repeated through 2024-07-24 16:58:01.727696 ...]
00:41:04.983 [2024-07-24 16:58:01.727696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:04.983 [2024-07-24 16:58:01.729994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.730426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.730824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.732039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.732446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.733627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.734373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.734774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.735181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.737052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.737471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.737870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.738278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.983 [2024-07-24 16:58:01.738629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.739146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.739554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.739955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.740361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.742291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.742704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.743112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.743519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.743966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.744544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.744954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.745380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.983 [2024-07-24 16:58:01.745781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.747669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.748084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.748496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.748897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.749333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.749847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.750262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.750665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.751062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.752990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.753414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.753814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.983 [2024-07-24 16:58:01.754220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.754593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.755105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.755516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.757052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.758328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.760130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.760548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.760946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.762631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.762958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.763481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.765021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.983 [2024-07-24 16:58:01.766662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.767065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.770128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.770545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.772013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.772084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.772400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.772914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.773330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.773772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.773828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.776388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.983 [2024-07-24 16:58:01.776454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.984 [2024-07-24 16:58:01.777313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.777375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.777796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.778320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.778382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.779862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.779919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.781755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.781822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.782231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.782283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.782595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.784418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.984 [2024-07-24 16:58:01.784488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.785355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.785406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.787209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.787278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.788603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.788654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.789013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.789919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.789982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.791596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.791688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.793480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.984 [2024-07-24 16:58:01.793545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.794841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.794896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.795211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.796115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.796184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.797094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.797155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.799479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.799545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.800457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.800513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.800885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.984 [2024-07-24 16:58:01.802614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.802684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.804017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.804073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.806950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.807021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.808668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.808728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.809066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.810095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.810165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.811272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.811327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.984 [2024-07-24 16:58:01.813753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.813819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.814558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.814615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.814920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.816446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.816511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.816909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.816962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.819978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.820051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.821016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.821068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.984 [2024-07-24 16:58:01.821437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.822647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.822710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.823123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.823183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.825426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.825492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.827063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.827150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.827460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.827973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.828036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.828442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.984 [2024-07-24 16:58:01.828497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.830921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.830985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.831919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.831973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.832328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.832841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.832910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.833312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.833364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.835539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.835605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.984 [2024-07-24 16:58:01.836538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:04.985 [2024-07-24 16:58:01.836593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.985 [2024-07-24 16:58:01.836900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.985 [2024-07-24 16:58:01.837883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.985 [2024-07-24 16:58:01.837954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.985 [2024-07-24 16:58:01.838353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:04.985 [2024-07-24 16:58:01.838415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.840283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.840362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.841869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.841924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.842244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.842758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.842822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:05.244 [2024-07-24 16:58:01.843228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.843284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.845281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.845348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.845745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.845801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.846219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.847495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.847559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.848825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.848876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.851756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:05.244 [2024-07-24 16:58:01.851824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:05.245 [2024-07-24 16:58:01.884275] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:41:05.503
00:41:05.503 Latency(us)
00:41:05.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:05.504 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:41:05.504 Verification LBA range: start 0x0 length 0x100
00:41:05.504 crypto_ram : 5.87 43.59 2.72 0.00 0.00 2852862.36 70883.74 2711198.11
00:41:05.504 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:41:05.504 Verification LBA range: start 0x100 length 0x100
00:41:05.504 crypto_ram : 5.83 43.89 2.74 0.00 0.00 2828093.03 79691.78 2711198.11
00:41:05.504 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:41:05.504 Verification LBA range: start 0x0 length 0x100
00:41:05.504 crypto_ram2 : 5.87 43.59 2.72 0.00 0.00 2751035.80 70464.31 2630667.47
00:41:05.504 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:41:05.504 Verification LBA range: start 0x100 length 0x100
00:41:05.504 crypto_ram2 : 5.83 43.88 2.74 0.00 0.00 2725786.42 79272.35 2523293.29
00:41:05.504 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:41:05.504 Verification LBA range: start 0x0 length 0x100
00:41:05.504 crypto_ram3 : 5.61 270.57 16.91 0.00 0.00 422304.20 67528.29 650955.98
00:41:05.504 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:41:05.504 Verification LBA range: start 0x100 length 0x100
00:41:05.504 crypto_ram3 : 5.58 281.87 17.62 0.00 0.00 405034.83 9856.61 650955.98
00:41:05.504 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:41:05.504 Verification LBA range: start 0x0 length 0x100
00:41:05.504 crypto_ram4 : 5.71 287.30 17.96 0.00 0.00 386122.68 20656.95 473117.49
00:41:05.504 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:41:05.504 Verification LBA range: start 0x100 length 0x100
00:41:05.504 crypto_ram4 : 5.70 298.36 18.65 0.00 0.00 371284.09
1212.42 499961.04 00:41:05.504 =================================================================================================================== 00:41:05.504 Total : 1313.05 82.07 0.00 0.00 724163.31 1212.42 2711198.11 00:41:08.790 00:41:08.790 real 0m12.929s 00:41:08.790 user 0m23.916s 00:41:08.790 sys 0m0.617s 00:41:08.790 16:58:05 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:08.790 16:58:05 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:41:08.790 ************************************ 00:41:08.790 END TEST bdev_verify_big_io 00:41:08.790 ************************************ 00:41:08.790 16:58:05 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:08.790 16:58:05 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:41:08.790 16:58:05 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:08.790 16:58:05 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:41:08.790 ************************************ 00:41:08.790 START TEST bdev_write_zeroes 00:41:08.790 ************************************ 00:41:08.790 16:58:05 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:08.790 [2024-07-24 16:58:05.259636] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:41:08.790 [2024-07-24 16:58:05.259752] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887901 ] 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:01.0 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:01.1 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:01.2 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:01.3 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:01.4 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:01.5 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:01.6 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:01.7 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:02.0 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:02.1 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:02.2 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:02.3 cannot be used 
00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:02.4 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:02.5 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:02.6 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3d:02.7 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:01.0 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:01.1 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:01.2 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:01.3 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:01.4 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:01.5 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:01.6 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:01.7 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:02.0 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:02.1 cannot be used 00:41:08.790 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:02.2 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:02.3 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:02.4 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:02.5 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:02.6 cannot be used 00:41:08.790 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:08.790 EAL: Requested device 0000:3f:02.7 cannot be used 00:41:08.790 [2024-07-24 16:58:05.484237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:09.049 [2024-07-24 16:58:05.766872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:09.049 [2024-07-24 16:58:05.788633] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:41:09.049 [2024-07-24 16:58:05.796653] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:41:09.049 [2024-07-24 16:58:05.804667] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:41:09.616 [2024-07-24 16:58:06.190641] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:41:12.196 [2024-07-24 16:58:09.046488] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:41:12.196 [2024-07-24 16:58:09.046571] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:41:12.196 [2024-07-24 16:58:09.046590] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:41:12.196 [2024-07-24 16:58:09.054508] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:41:12.196 [2024-07-24 16:58:09.054549] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:41:12.196 [2024-07-24 16:58:09.054565] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:12.456 [2024-07-24 16:58:09.062548] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:41:12.456 [2024-07-24 16:58:09.062584] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:41:12.456 [2024-07-24 16:58:09.062599] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:12.456 [2024-07-24 16:58:09.070541] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:41:12.456 [2024-07-24 16:58:09.070575] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:41:12.456 [2024-07-24 16:58:09.070590] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:12.714 Running I/O for 1 seconds... 
00:41:13.649
00:41:13.649 Latency(us)
00:41:13.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:13.649 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:41:13.649 crypto_ram : 1.03 1858.18 7.26 0.00 0.00 68298.04 6265.24 82627.79
00:41:13.649 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:41:13.649 crypto_ram2 : 1.03 1863.97 7.28 0.00 0.00 67670.08 6081.74 76336.33
00:41:13.649 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:41:13.649 crypto_ram3 : 1.02 14260.13 55.70 0.00 0.00 8826.52 2647.65 11586.76
00:41:13.649 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:41:13.649 crypto_ram4 : 1.02 14246.07 55.65 0.00 0.00 8794.78 2647.65 9804.19
00:41:13.649 ===================================================================================================================
00:41:13.649 Total : 32228.35 125.89 0.00 0.00 15678.28 2647.65 82627.79
00:41:16.178
00:41:16.178 real 0m7.638s
00:41:16.178 user 0m7.032s
00:41:16.178 sys 0m0.547s
00:41:16.178 16:58:12 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:41:16.178 16:58:12 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:41:16.178 ************************************
00:41:16.178 END TEST bdev_write_zeroes
00:41:16.178 ************************************
00:41:16.178 16:58:12 blockdev_crypto_aesni -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:41:16.178 16:58:12 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:41:16.178 16:58:12 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable
00:41:16.178
16:58:12 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:41:16.178 ************************************
00:41:16.178 START TEST bdev_json_nonenclosed
00:41:16.178 ************************************
00:41:16.178 16:58:12 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:41:16.178 [2024-07-24 16:58:12.983572] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:41:16.178 [2024-07-24 16:58:12.983686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889218 ]
00:41:16.435 [2024-07-24 16:58:13.208564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:16.698 [2024-07-24 16:58:13.488713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:41:16.698 [2024-07-24 16:58:13.488802] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:41:16.698 [2024-07-24 16:58:13.488830] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:41:16.698 [2024-07-24 16:58:13.488845] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:17.267 00:41:17.267 real 0m1.198s 00:41:17.267 user 0m0.922s 00:41:17.267 sys 0m0.268s 00:41:17.267 16:58:14 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:17.267 16:58:14 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:41:17.267 ************************************ 00:41:17.267 END TEST bdev_json_nonenclosed 00:41:17.267 ************************************ 00:41:17.267 16:58:14 blockdev_crypto_aesni -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:17.267 16:58:14 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:41:17.267 16:58:14 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:17.267 16:58:14 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:41:17.527 ************************************ 00:41:17.527 START TEST bdev_json_nonarray 00:41:17.527 ************************************ 00:41:17.527 16:58:14 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:41:17.527 [2024-07-24 16:58:14.248485] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:41:17.527 [2024-07-24 16:58:14.248601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889495 ]
00:41:17.786 [2024-07-24 16:58:14.474681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:18.044 [2024-07-24 16:58:14.763670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:41:18.044 [2024-07-24 16:58:14.763768] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:41:18.044 [2024-07-24 16:58:14.763796] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:41:18.044 [2024-07-24 16:58:14.763811] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:18.610 00:41:18.610 real 0m1.184s 00:41:18.610 user 0m0.906s 00:41:18.610 sys 0m0.271s 00:41:18.610 16:58:15 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:18.610 16:58:15 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:41:18.610 ************************************ 00:41:18.610 END TEST bdev_json_nonarray 00:41:18.610 ************************************ 00:41:18.610 16:58:15 blockdev_crypto_aesni -- bdev/blockdev.sh@786 -- # [[ crypto_aesni == bdev ]] 00:41:18.610 16:58:15 blockdev_crypto_aesni -- bdev/blockdev.sh@793 -- # [[ crypto_aesni == gpt ]] 00:41:18.610 16:58:15 blockdev_crypto_aesni -- bdev/blockdev.sh@797 -- # [[ crypto_aesni == crypto_sw ]] 00:41:18.610 16:58:15 blockdev_crypto_aesni -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:41:18.610 16:58:15 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # cleanup 00:41:18.610 16:58:15 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:41:18.610 16:58:15 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:41:18.610 16:58:15 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]] 00:41:18.610 16:58:15 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]] 00:41:18.610 16:58:15 blockdev_crypto_aesni -- bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]] 00:41:18.610 16:58:15 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]] 00:41:18.610 00:41:18.610 real 1m48.490s 00:41:18.611 user 3m44.818s 00:41:18.611 sys 0m11.169s 00:41:18.611 16:58:15 
blockdev_crypto_aesni -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:18.611 16:58:15 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:41:18.611 ************************************ 00:41:18.611 END TEST blockdev_crypto_aesni 00:41:18.611 ************************************ 00:41:18.611 16:58:15 -- spdk/autotest.sh@362 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:41:18.611 16:58:15 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:18.611 16:58:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:18.611 16:58:15 -- common/autotest_common.sh@10 -- # set +x 00:41:18.611 ************************************ 00:41:18.611 START TEST blockdev_crypto_sw 00:41:18.611 ************************************ 00:41:18.611 16:58:15 blockdev_crypto_sw -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:41:18.869 * Looking for test storage... 
00:41:18.869 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # uname -s 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@681 -- # test_type=crypto_sw 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # crypto_device= 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # dek= 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # env_ctx= 00:41:18.869 
16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == bdev ]] 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == crypto_* ]] 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1889809 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:41:18.869 16:58:15 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 1889809 00:41:18.869 16:58:15 blockdev_crypto_sw -- common/autotest_common.sh@831 -- # '[' -z 1889809 ']' 00:41:18.869 16:58:15 blockdev_crypto_sw -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:18.869 16:58:15 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:18.869 16:58:15 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:18.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:18.869 16:58:15 blockdev_crypto_sw -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:18.869 16:58:15 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:18.869 [2024-07-24 16:58:15.707041] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:41:18.869 [2024-07-24 16:58:15.707167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889809 ] 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:01.0 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:01.1 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:01.2 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:01.3 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:01.4 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:01.5 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:01.6 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:01.7 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:02.0 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:02.1 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:02.2 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:02.3 cannot be used 
00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:02.4 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:02.5 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:02.6 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3d:02.7 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:01.0 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:01.1 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:01.2 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:01.3 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:01.4 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:01.5 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:01.6 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:01.7 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:02.0 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:02.1 cannot be used 00:41:19.129 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:02.2 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:02.3 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:02.4 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:02.5 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:02.6 cannot be used 00:41:19.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:19.129 EAL: Requested device 0000:3f:02.7 cannot be used 00:41:19.129 [2024-07-24 16:58:15.933613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:19.386 [2024-07-24 16:58:16.214596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:20.319 16:58:16 blockdev_crypto_sw -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:20.319 16:58:16 blockdev_crypto_sw -- common/autotest_common.sh@864 -- # return 0 00:41:20.319 16:58:16 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:41:20.319 16:58:16 blockdev_crypto_sw -- bdev/blockdev.sh@710 -- # setup_crypto_sw_conf 00:41:20.319 16:58:16 blockdev_crypto_sw -- bdev/blockdev.sh@192 -- # rpc_cmd 00:41:20.319 16:58:16 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:20.319 16:58:16 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:21.253 Malloc0 00:41:21.253 Malloc1 00:41:21.253 true 00:41:21.253 true 00:41:21.511 true 00:41:21.511 [2024-07-24 16:58:18.121903] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:41:21.511 crypto_ram 00:41:21.511 [2024-07-24 16:58:18.129915] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: 
*NOTICE*: Found key "test_dek_sw2" 00:41:21.511 crypto_ram2 00:41:21.511 [2024-07-24 16:58:18.137958] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:41:21.511 crypto_ram3 00:41:21.511 [ 00:41:21.511 { 00:41:21.511 "name": "Malloc1", 00:41:21.511 "aliases": [ 00:41:21.511 "8030a17d-9fb4-436d-aa95-59e47a4e9a5d" 00:41:21.511 ], 00:41:21.511 "product_name": "Malloc disk", 00:41:21.511 "block_size": 4096, 00:41:21.511 "num_blocks": 4096, 00:41:21.511 "uuid": "8030a17d-9fb4-436d-aa95-59e47a4e9a5d", 00:41:21.511 "assigned_rate_limits": { 00:41:21.511 "rw_ios_per_sec": 0, 00:41:21.511 "rw_mbytes_per_sec": 0, 00:41:21.511 "r_mbytes_per_sec": 0, 00:41:21.511 "w_mbytes_per_sec": 0 00:41:21.511 }, 00:41:21.511 "claimed": true, 00:41:21.511 "claim_type": "exclusive_write", 00:41:21.511 "zoned": false, 00:41:21.511 "supported_io_types": { 00:41:21.511 "read": true, 00:41:21.511 "write": true, 00:41:21.511 "unmap": true, 00:41:21.511 "flush": true, 00:41:21.511 "reset": true, 00:41:21.511 "nvme_admin": false, 00:41:21.511 "nvme_io": false, 00:41:21.511 "nvme_io_md": false, 00:41:21.511 "write_zeroes": true, 00:41:21.511 "zcopy": true, 00:41:21.511 "get_zone_info": false, 00:41:21.511 "zone_management": false, 00:41:21.511 "zone_append": false, 00:41:21.511 "compare": false, 00:41:21.511 "compare_and_write": false, 00:41:21.511 "abort": true, 00:41:21.511 "seek_hole": false, 00:41:21.511 "seek_data": false, 00:41:21.511 "copy": true, 00:41:21.511 "nvme_iov_md": false 00:41:21.511 }, 00:41:21.511 "memory_domains": [ 00:41:21.511 { 00:41:21.511 "dma_device_id": "system", 00:41:21.511 "dma_device_type": 1 00:41:21.511 }, 00:41:21.511 { 00:41:21.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:21.511 "dma_device_type": 2 00:41:21.511 } 00:41:21.511 ], 00:41:21.511 "driver_specific": {} 00:41:21.511 } 00:41:21.511 ] 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.511 16:58:18 
blockdev_crypto_sw -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.511 16:58:18 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # cat 00:41:21.511 16:58:18 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.511 16:58:18 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.511 16:58:18 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.511 16:58:18 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:41:21.511 16:58:18 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:41:21.511 16:58:18 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:21.511 16:58:18 blockdev_crypto_sw -- 
common/autotest_common.sh@10 -- # set +x 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:21.511 16:58:18 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:41:21.511 16:58:18 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r .name 00:41:21.511 16:58:18 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "8b83c29c-7d4d-5cc0-a1ee-85ef408a04aa"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8b83c29c-7d4d-5cc0-a1ee-85ef408a04aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "4ed42b71-6d64-578c-a0a5-43b1f1a9ae02"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "4ed42b71-6d64-578c-a0a5-43b1f1a9ae02",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:41:21.511 16:58:18 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:41:21.511 16:58:18 blockdev_crypto_sw -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:41:21.511 16:58:18 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:41:21.511 16:58:18 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # killprocess 1889809 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@950 -- # '[' -z 1889809 ']' 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # kill -0 1889809 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # uname 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:21.511 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1889809 00:41:21.768 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:21.768 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:21.768 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1889809' 00:41:21.768 killing process with pid 1889809 00:41:21.768 16:58:18 blockdev_crypto_sw -- common/autotest_common.sh@969 -- # kill 1889809 00:41:21.768 16:58:18 
blockdev_crypto_sw -- common/autotest_common.sh@974 -- # wait 1889809 00:41:25.051 16:58:21 blockdev_crypto_sw -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:41:25.051 16:58:21 blockdev_crypto_sw -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:41:25.051 16:58:21 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:41:25.051 16:58:21 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:25.051 16:58:21 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:25.051 ************************************ 00:41:25.051 START TEST bdev_hello_world 00:41:25.051 ************************************ 00:41:25.051 16:58:21 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:41:25.310 [2024-07-24 16:58:21.969104] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:41:25.310 [2024-07-24 16:58:21.969224] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1890740 ] 00:41:25.568 [2024-07-24 16:58:22.196085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:25.826 [2024-07-24 16:58:22.483857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:26.393 [2024-07-24 16:58:23.018707] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:41:26.393 [2024-07-24 16:58:23.018782] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 [2024-07-24 16:58:23.018802] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival [2024-07-24 16:58:23.026718] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" [2024-07-24 16:58:23.026756] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 [2024-07-24 16:58:23.026772] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival [2024-07-24 16:58:23.034738] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" [2024-07-24 16:58:23.034771]
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:41:26.393 [2024-07-24 16:58:23.034786] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:26.393 [2024-07-24 16:58:23.115860] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:41:26.393 [2024-07-24 16:58:23.115895] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:41:26.393 [2024-07-24 16:58:23.115921] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:41:26.393 [2024-07-24 16:58:23.118171] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:41:26.393 [2024-07-24 16:58:23.118279] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:41:26.393 [2024-07-24 16:58:23.118301] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:41:26.393 [2024-07-24 16:58:23.118340] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:41:26.393 00:41:26.393 [2024-07-24 16:58:23.118364] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:41:28.370 00:41:28.370 real 0m3.055s 00:41:28.370 user 0m2.652s 00:41:28.370 sys 0m0.377s 00:41:28.370 16:58:24 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:28.370 16:58:24 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:41:28.370 ************************************ 00:41:28.370 END TEST bdev_hello_world 00:41:28.370 ************************************ 00:41:28.370 16:58:24 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:41:28.370 16:58:24 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:28.370 16:58:24 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:28.370 16:58:24 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:28.370 ************************************ 00:41:28.370 START TEST bdev_bounds 00:41:28.370 ************************************ 00:41:28.370 16:58:25 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:41:28.370 16:58:25 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=1891220 00:41:28.370 16:58:25 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:41:28.371 16:58:25 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:41:28.371 16:58:25 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 1891220' 00:41:28.371 Process bdevio pid: 1891220 00:41:28.371 16:58:25 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 1891220 00:41:28.371 16:58:25 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@831 -- # '[' -z 1891220 ']' 00:41:28.371 16:58:25 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:28.371 16:58:25 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:28.371 16:58:25 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:28.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:28.371 16:58:25 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:28.371 16:58:25 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:41:28.371 [2024-07-24 16:58:25.108955] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:41:28.371 [2024-07-24 16:58:25.109070] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891220 ] 00:41:28.630 [2024-07-24 16:58:25.333646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:28.888 [2024-07-24 16:58:25.621382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:28.888 [2024-07-24 16:58:25.621452] reactor.c: 941:reactor_run:
*NOTICE*: Reactor started on core 0 00:41:28.888 [2024-07-24 16:58:25.621453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:41:29.454 [2024-07-24 16:58:26.182323] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:41:29.454 [2024-07-24 16:58:26.182408] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:41:29.454 [2024-07-24 16:58:26.182428] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:29.454 [2024-07-24 16:58:26.190334] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:41:29.454 [2024-07-24 16:58:26.190374] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:41:29.454 [2024-07-24 16:58:26.190392] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:29.454 [2024-07-24 16:58:26.198362] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:41:29.454 [2024-07-24 16:58:26.198399] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:41:29.454 [2024-07-24 16:58:26.198415] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:29.711 16:58:26 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:29.711 16:58:26 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:41:29.712 16:58:26 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:41:29.712 I/O targets: 00:41:29.712 crypto_ram: 32768 blocks of 512 bytes (16 MiB) 00:41:29.712 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB) 00:41:29.712 00:41:29.712 00:41:29.712 CUnit - A unit testing framework for C - Version 2.1-3 00:41:29.712 http://cunit.sourceforge.net/ 
00:41:29.712 00:41:29.712 00:41:29.712 Suite: bdevio tests on: crypto_ram3 00:41:29.712 Test: blockdev write read block ...passed 00:41:29.712 Test: blockdev write zeroes read block ...passed 00:41:29.712 Test: blockdev write zeroes read no split ...passed 00:41:29.712 Test: blockdev write zeroes read split ...passed 00:41:29.712 Test: blockdev write zeroes read split partial ...passed 00:41:29.712 Test: blockdev reset ...passed 00:41:29.712 Test: blockdev write read 8 blocks ...passed 00:41:29.712 Test: blockdev write read size > 128k ...passed 00:41:29.712 Test: blockdev write read invalid size ...passed 00:41:29.712 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:29.712 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:29.712 Test: blockdev write read max offset ...passed 00:41:29.712 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:29.712 Test: blockdev writev readv 8 blocks ...passed 00:41:29.712 Test: blockdev writev readv 30 x 1block ...passed 00:41:29.712 Test: blockdev writev readv block ...passed 00:41:29.712 Test: blockdev writev readv size > 128k ...passed 00:41:29.712 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:29.712 Test: blockdev comparev and writev ...passed 00:41:29.712 Test: blockdev nvme passthru rw ...passed 00:41:29.712 Test: blockdev nvme passthru vendor specific ...passed 00:41:29.712 Test: blockdev nvme admin passthru ...passed 00:41:29.712 Test: blockdev copy ...passed 00:41:29.712 Suite: bdevio tests on: crypto_ram 00:41:29.712 Test: blockdev write read block ...passed 00:41:29.712 Test: blockdev write zeroes read block ...passed 00:41:29.712 Test: blockdev write zeroes read no split ...passed 00:41:29.712 Test: blockdev write zeroes read split ...passed 00:41:29.970 Test: blockdev write zeroes read split partial ...passed 00:41:29.970 Test: blockdev reset ...passed 00:41:29.970 Test: blockdev write read 8 blocks ...passed 
00:41:29.970 Test: blockdev write read size > 128k ...passed 00:41:29.970 Test: blockdev write read invalid size ...passed 00:41:29.970 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:29.970 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:29.970 Test: blockdev write read max offset ...passed 00:41:29.970 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:29.970 Test: blockdev writev readv 8 blocks ...passed 00:41:29.970 Test: blockdev writev readv 30 x 1block ...passed 00:41:29.970 Test: blockdev writev readv block ...passed 00:41:29.970 Test: blockdev writev readv size > 128k ...passed 00:41:29.970 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:29.970 Test: blockdev comparev and writev ...passed 00:41:29.970 Test: blockdev nvme passthru rw ...passed 00:41:29.970 Test: blockdev nvme passthru vendor specific ...passed 00:41:29.970 Test: blockdev nvme admin passthru ...passed 00:41:29.970 Test: blockdev copy ...passed 00:41:29.970 00:41:29.970 Run Summary: Type Total Ran Passed Failed Inactive 00:41:29.970 suites 2 2 n/a 0 0 00:41:29.970 tests 46 46 46 0 0 00:41:29.970 asserts 260 260 260 0 n/a 00:41:29.970 00:41:29.970 Elapsed time = 0.544 seconds 00:41:29.970 0 00:41:29.970 16:58:26 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 1891220 00:41:29.970 16:58:26 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 1891220 ']' 00:41:29.970 16:58:26 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 1891220 00:41:29.970 16:58:26 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:41:29.970 16:58:26 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:29.970 16:58:26 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1891220 00:41:29.970 16:58:26 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:29.970 16:58:26 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:29.970 16:58:26 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1891220' 00:41:29.970 killing process with pid 1891220 00:41:29.970 16:58:26 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@969 -- # kill 1891220 00:41:29.970 16:58:26 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@974 -- # wait 1891220 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:41:31.870 00:41:31.870 real 0m3.506s 00:41:31.870 user 0m8.066s 00:41:31.870 sys 0m0.514s 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:41:31.870 ************************************ 00:41:31.870 END TEST bdev_bounds 00:41:31.870 ************************************ 00:41:31.870 16:58:28 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:41:31.870 16:58:28 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:41:31.870 16:58:28 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:31.870 16:58:28 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:31.870 ************************************ 00:41:31.870 START TEST bdev_nbd 00:41:31.870 ************************************ 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # uname 
-s 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=1891899 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@316 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 1891899 /var/tmp/spdk-nbd.sock 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 1891899 ']' 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:41:31.870 16:58:28 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:31.871 16:58:28 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:41:31.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:41:31.871 16:58:28 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:31.871 16:58:28 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:41:31.871 [2024-07-24 16:58:28.711690] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:41:31.871 [2024-07-24 16:58:28.711810] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:32.128 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.128 EAL: Requested device 0000:3d:01.0 cannot be used 00:41:32.128 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.128 EAL: Requested device 0000:3d:01.1 cannot be used 00:41:32.128 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.128 EAL: Requested device 0000:3d:01.2 cannot be used 00:41:32.128 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.128 EAL: Requested device 0000:3d:01.3 cannot be used 00:41:32.128 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.128 EAL: Requested device 0000:3d:01.4 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3d:01.5 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3d:01.6 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3d:01.7 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3d:02.0 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3d:02.1 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3d:02.2 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3d:02.3 cannot be used 00:41:32.129 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3d:02.4 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3d:02.5 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3d:02.6 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3d:02.7 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:01.0 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:01.1 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:01.2 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:01.3 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:01.4 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:01.5 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:01.6 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:01.7 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:02.0 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:02.1 cannot be used 00:41:32.129 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:02.2 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:02.3 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:02.4 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:02.5 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:02.6 cannot be used 00:41:32.129 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:32.129 EAL: Requested device 0000:3f:02.7 cannot be used 00:41:32.129 [2024-07-24 16:58:28.939746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:32.387 [2024-07-24 16:58:29.226928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:33.323 [2024-07-24 16:58:29.823273] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:41:33.323 [2024-07-24 16:58:29.823353] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:41:33.323 [2024-07-24 16:58:29.823374] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:33.323 [2024-07-24 16:58:29.831306] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:41:33.323 [2024-07-24 16:58:29.831347] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:41:33.323 [2024-07-24 16:58:29.831365] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:33.323 [2024-07-24 16:58:29.839338] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:41:33.323 [2024-07-24 16:58:29.839375] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:41:33.323 [2024-07-24 16:58:29.839391] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:41:33.323 16:58:29 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:41:33.583 
16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:33.583 1+0 records in 00:41:33.583 1+0 records out 00:41:33.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245742 s, 16.7 MB/s 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:41:33.583 16:58:30 
blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:41:33.583 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:33.842 1+0 records in 00:41:33.842 1+0 records out 00:41:33.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363792 s, 11.3 MB/s 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat 
-c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:41:33.842 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:34.102 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:41:34.102 { 00:41:34.102 "nbd_device": "/dev/nbd0", 00:41:34.102 "bdev_name": "crypto_ram" 00:41:34.102 }, 00:41:34.102 { 00:41:34.102 "nbd_device": "/dev/nbd1", 00:41:34.102 "bdev_name": "crypto_ram3" 00:41:34.102 } 00:41:34.102 ]' 00:41:34.102 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:41:34.102 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:41:34.102 { 00:41:34.102 "nbd_device": "/dev/nbd0", 00:41:34.102 "bdev_name": "crypto_ram" 00:41:34.102 }, 00:41:34.102 { 00:41:34.102 "nbd_device": "/dev/nbd1", 00:41:34.102 "bdev_name": "crypto_ram3" 00:41:34.102 } 00:41:34.102 ]' 00:41:34.102 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:41:34.102 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:41:34.102 16:58:30 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:34.102 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:34.102 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:34.102 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:41:34.102 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:34.102 16:58:30 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:34.361 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:34.361 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:34.361 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:34.361 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:34.361 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:34.361 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:34.361 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:34.361 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:34.361 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:34.361 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:41:34.621 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:34.621 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:34.621 16:58:31 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:34.621 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:34.621 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:34.621 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:34.621 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:34.621 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:34.621 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:34.621 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:34.621 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:34.880 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:41:34.880 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:41:34.880 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:34.880 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:41:34.880 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:41:34.880 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:34.880 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:41:34.880 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:41:34.880 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:41:34.880 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:41:34.880 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 
00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:34.881 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 
00:41:35.140 /dev/nbd0 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:35.140 1+0 records in 00:41:35.140 1+0 records out 00:41:35.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265762 s, 15.4 MB/s 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # 
return 0 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:35.140 16:58:31 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:41:35.400 /dev/nbd1 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:35.400 1+0 records in 00:41:35.400 1+0 records out 00:41:35.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321932 s, 12.7 MB/s 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:35.400 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:41:35.660 { 00:41:35.660 "nbd_device": "/dev/nbd0", 00:41:35.660 "bdev_name": "crypto_ram" 00:41:35.660 }, 00:41:35.660 { 00:41:35.660 "nbd_device": "/dev/nbd1", 00:41:35.660 "bdev_name": "crypto_ram3" 00:41:35.660 } 00:41:35.660 ]' 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:41:35.660 { 00:41:35.660 "nbd_device": "/dev/nbd0", 00:41:35.660 "bdev_name": "crypto_ram" 00:41:35.660 }, 00:41:35.660 { 00:41:35.660 "nbd_device": "/dev/nbd1", 00:41:35.660 "bdev_name": "crypto_ram3" 00:41:35.660 } 00:41:35.660 ]' 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:41:35.660 /dev/nbd1' 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:41:35.660 /dev/nbd1' 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:41:35.660 256+0 records in 00:41:35.660 256+0 records out 00:41:35.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106342 s, 98.6 MB/s 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:41:35.660 256+0 records in 00:41:35.660 256+0 records out 00:41:35.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223171 s, 47.0 MB/s 00:41:35.660 16:58:32 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:41:35.660 256+0 records in 00:41:35.660 256+0 records out 00:41:35.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0432448 s, 24.2 MB/s 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:35.660 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:35.919 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:35.919 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:35.919 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:35.919 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:35.919 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:35.919 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:35.919 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:35.919 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:35.919 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:35.919 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:41:36.178 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:36.178 16:58:32 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:36.178 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:36.178 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:36.178 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:36.178 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:36.178 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:36.178 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:36.178 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:36.178 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:36.178 16:58:32 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 
00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:41:36.437 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:41:36.701 malloc_lvol_verify 00:41:36.701 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:41:36.967 449aca4a-bf20-45d8-91f4-8ef045adc254 00:41:36.967 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:41:37.226 ad2ee153-0d6f-4347-90ad-e07f660d1cb4 00:41:37.226 16:58:33 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:41:37.485 /dev/nbd0 00:41:37.485 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:41:37.485 mke2fs 1.46.5 (30-Dec-2021) 00:41:37.485 Discarding device blocks: 0/4096 done 00:41:37.485 Creating filesystem with 4096 1k 
blocks and 1024 inodes 00:41:37.485 00:41:37.485 Allocating group tables: 0/1 done 00:41:37.485 Writing inode tables: 0/1 done 00:41:37.485 Creating journal (1024 blocks): done 00:41:37.485 Writing superblocks and filesystem accounting information: 0/1 done 00:41:37.485 00:41:37.485 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:41:37.485 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:41:37.485 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:37.485 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:37.485 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:37.485 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:41:37.486 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:37.486 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:37.745 16:58:34 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 1891899 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 1891899 ']' 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 1891899 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1891899 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1891899' 00:41:37.745 killing process with pid 1891899 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@969 -- # kill 1891899 00:41:37.745 16:58:34 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@974 -- # wait 1891899 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:41:39.652 00:41:39.652 real 0m7.772s 00:41:39.652 user 0m9.808s 00:41:39.652 sys 0m2.505s 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:41:39.652 ************************************ 00:41:39.652 END TEST bdev_nbd 00:41:39.652 ************************************ 00:41:39.652 16:58:36 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # 
[[ y == y ]] 00:41:39.652 16:58:36 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = nvme ']' 00:41:39.652 16:58:36 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = gpt ']' 00:41:39.652 16:58:36 blockdev_crypto_sw -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:41:39.652 16:58:36 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:39.652 16:58:36 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:39.652 16:58:36 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:39.652 ************************************ 00:41:39.652 START TEST bdev_fio 00:41:39.652 ************************************ 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:41:39.652 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio 
-- common/autotest_common.sh@1281 -- # local workload=verify 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
filename=crypto_ram 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:39.652 16:58:36 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:41:39.911 ************************************ 00:41:39.911 START TEST bdev_fio_rw_verify 00:41:39.911 ************************************ 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:39.911 16:58:36 
blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:39.911 16:58:36 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:40.170 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:40.170 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:40.170 fio-3.35 00:41:40.170 Starting 2 threads 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:01.0 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:01.1 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:01.2 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:01.3 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:01.4 cannot be used 00:41:40.430 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:01.5 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:01.6 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:01.7 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:02.0 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:02.1 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:02.2 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:02.3 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:02.4 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:02.5 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:02.6 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3d:02.7 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3f:01.0 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3f:01.1 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3f:01.2 cannot be used 00:41:40.430 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3f:01.3 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.430 EAL: Requested device 0000:3f:01.4 cannot be used 00:41:40.430 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.431 EAL: Requested device 0000:3f:01.5 cannot be used 00:41:40.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.431 EAL: Requested device 0000:3f:01.6 cannot be used 00:41:40.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.431 EAL: Requested device 0000:3f:01.7 cannot be used 00:41:40.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.431 EAL: Requested device 0000:3f:02.0 cannot be used 00:41:40.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.431 EAL: Requested device 0000:3f:02.1 cannot be used 00:41:40.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.431 EAL: Requested device 0000:3f:02.2 cannot be used 00:41:40.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.431 EAL: Requested device 0000:3f:02.3 cannot be used 00:41:40.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.431 EAL: Requested device 0000:3f:02.4 cannot be used 00:41:40.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.431 EAL: Requested device 0000:3f:02.5 cannot be used 00:41:40.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.431 EAL: Requested device 0000:3f:02.6 cannot be used 00:41:40.431 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:40.431 EAL: Requested device 0000:3f:02.7 cannot be used 00:41:52.692 00:41:52.692 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=1893578: Wed Jul 24 16:58:48 2024 00:41:52.692 read: IOPS=22.1k, BW=86.2MiB/s 
(90.4MB/s)(862MiB/10001msec) 00:41:52.692 slat (usec): min=14, max=325, avg=20.17, stdev= 3.77 00:41:52.692 clat (usec): min=7, max=564, avg=144.77, stdev=58.30 00:41:52.692 lat (usec): min=25, max=585, avg=164.94, stdev=59.74 00:41:52.692 clat percentiles (usec): 00:41:52.692 | 50.000th=[ 143], 99.000th=[ 277], 99.900th=[ 302], 99.990th=[ 367], 00:41:52.692 | 99.999th=[ 510] 00:41:52.692 write: IOPS=26.5k, BW=104MiB/s (109MB/s)(983MiB/9492msec); 0 zone resets 00:41:52.692 slat (usec): min=14, max=296, avg=33.78, stdev= 5.08 00:41:52.692 clat (usec): min=24, max=889, avg=193.76, stdev=89.53 00:41:52.692 lat (usec): min=50, max=1031, avg=227.54, stdev=91.24 00:41:52.692 clat percentiles (usec): 00:41:52.692 | 50.000th=[ 188], 99.000th=[ 388], 99.900th=[ 416], 99.990th=[ 578], 00:41:52.692 | 99.999th=[ 840] 00:41:52.692 bw ( KiB/s): min=93744, max=106976, per=95.04%, avg=100763.37, stdev=1947.62, samples=38 00:41:52.692 iops : min=23436, max=26744, avg=25190.84, stdev=486.91, samples=38 00:41:52.692 lat (usec) : 10=0.01%, 20=0.01%, 50=5.09%, 100=14.99%, 250=63.13% 00:41:52.692 lat (usec) : 500=16.76%, 750=0.01%, 1000=0.01% 00:41:52.692 cpu : usr=99.19%, sys=0.38%, ctx=32, majf=0, minf=19331 00:41:52.692 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:52.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.692 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:52.692 issued rwts: total=220784,251599,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:52.692 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:52.692 00:41:52.692 Run status group 0 (all jobs): 00:41:52.692 READ: bw=86.2MiB/s (90.4MB/s), 86.2MiB/s-86.2MiB/s (90.4MB/s-90.4MB/s), io=862MiB (904MB), run=10001-10001msec 00:41:52.692 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=983MiB (1031MB), run=9492-9492msec 00:41:53.260 ----------------------------------------------------- 00:41:53.260 
Suppressions used: 00:41:53.260 count bytes template 00:41:53.260 2 23 /usr/src/fio/parse.c 00:41:53.260 1067 102432 /usr/src/fio/iolog.c 00:41:53.260 1 8 libtcmalloc_minimal.so 00:41:53.260 1 904 libcrypto.so 00:41:53.260 ----------------------------------------------------- 00:41:53.260 00:41:53.260 00:41:53.260 real 0m13.541s 00:41:53.260 user 0m34.164s 00:41:53.260 sys 0m0.770s 00:41:53.260 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:53.261 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:41:53.261 ************************************ 00:41:53.261 END TEST bdev_fio_rw_verify 00:41:53.261 ************************************ 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:41:53.520 16:58:50 
blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:41:53.520 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "8b83c29c-7d4d-5cc0-a1ee-85ef408a04aa"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8b83c29c-7d4d-5cc0-a1ee-85ef408a04aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": 
{' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "4ed42b71-6d64-578c-a0a5-43b1f1a9ae02"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "4ed42b71-6d64-578c-a0a5-43b1f1a9ae02",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:41:53.521 crypto_ram3 ]] 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "8b83c29c-7d4d-5cc0-a1ee-85ef408a04aa"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8b83c29c-7d4d-5cc0-a1ee-85ef408a04aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "4ed42b71-6d64-578c-a0a5-43b1f1a9ae02"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "4ed42b71-6d64-578c-a0a5-43b1f1a9ae02",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 
'select(.supported_io_types.unmap == true) | .name') 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:41:53.521 ************************************ 00:41:53.521 START TEST bdev_fio_trim 00:41:53.521 ************************************ 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:53.521 16:58:50 
blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:41:53.521 16:58:50 
blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:53.521 16:58:50 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:54.107 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:54.107 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:54.107 fio-3.35 00:41:54.107 Starting 2 threads 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:01.0 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:01.1 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:01.2 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:01.3 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:01.4 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: 
Requested device 0000:3d:01.5 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:01.6 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:01.7 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:02.0 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:02.1 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:02.2 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:02.3 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:02.4 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:02.5 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:02.6 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3d:02.7 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:01.0 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:01.1 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:01.2 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 
0000:3f:01.3 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:01.4 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:01.5 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:01.6 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:01.7 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:02.0 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:02.1 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:02.2 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:02.3 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:02.4 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:02.5 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:02.6 cannot be used 00:41:54.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.107 EAL: Requested device 0000:3f:02.7 cannot be used 00:42:06.315 00:42:06.315 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=1895962: Wed Jul 24 16:59:01 2024 00:42:06.315 write: IOPS=25.7k, BW=100MiB/s (105MB/s)(1004MiB/10001msec); 0 zone resets 00:42:06.315 slat (usec): min=18, max=357, avg=34.09, stdev=10.06 00:42:06.315 
clat (usec): min=62, max=720, avg=255.93, stdev=61.44 00:42:06.315 lat (usec): min=81, max=788, avg=290.02, stdev=58.68 00:42:06.315 clat percentiles (usec): 00:42:06.315 | 50.000th=[ 265], 99.000th=[ 355], 99.900th=[ 379], 99.990th=[ 594], 00:42:06.315 | 99.999th=[ 685] 00:42:06.315 bw ( KiB/s): min=101448, max=103304, per=100.00%, avg=102830.74, stdev=211.37, samples=38 00:42:06.315 iops : min=25362, max=25826, avg=25707.68, stdev=52.84, samples=38 00:42:06.315 trim: IOPS=25.7k, BW=100MiB/s (105MB/s)(1004MiB/10001msec); 0 zone resets 00:42:06.315 slat (usec): min=7, max=152, avg=15.43, stdev= 5.15 00:42:06.315 clat (usec): min=44, max=789, avg=170.56, stdev=96.41 00:42:06.315 lat (usec): min=52, max=873, avg=185.99, stdev=99.69 00:42:06.315 clat percentiles (usec): 00:42:06.315 | 50.000th=[ 139], 99.000th=[ 388], 99.900th=[ 404], 99.990th=[ 429], 00:42:06.315 | 99.999th=[ 750] 00:42:06.315 bw ( KiB/s): min=101472, max=103304, per=100.00%, avg=102832.42, stdev=208.41, samples=38 00:42:06.315 iops : min=25368, max=25826, avg=25708.11, stdev=52.10, samples=38 00:42:06.315 lat (usec) : 50=0.31%, 100=13.72%, 250=44.40%, 500=41.55%, 750=0.01% 00:42:06.315 lat (usec) : 1000=0.01% 00:42:06.315 cpu : usr=99.39%, sys=0.04%, ctx=34, majf=0, minf=2110 00:42:06.315 IO depths : 1=5.0%, 2=13.7%, 4=65.1%, 8=16.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.315 complete : 0=0.0%, 4=86.0%, 8=14.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.315 issued rwts: total=0,256900,256901,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.315 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:06.315 00:42:06.315 Run status group 0 (all jobs): 00:42:06.315 WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=1004MiB (1052MB), run=10001-10001msec 00:42:06.315 TRIM: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=1004MiB (1052MB), run=10001-10001msec 00:42:06.881 
----------------------------------------------------- 00:42:06.881 Suppressions used: 00:42:06.881 count bytes template 00:42:06.881 2 23 /usr/src/fio/parse.c 00:42:06.881 1 8 libtcmalloc_minimal.so 00:42:06.881 1 904 libcrypto.so 00:42:06.881 ----------------------------------------------------- 00:42:06.881 00:42:06.881 00:42:06.881 real 0m13.363s 00:42:06.881 user 0m33.979s 00:42:06.881 sys 0m0.641s 00:42:06.881 16:59:03 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:06.881 16:59:03 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:42:06.881 ************************************ 00:42:06.881 END TEST bdev_fio_trim 00:42:06.881 ************************************ 00:42:06.881 16:59:03 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:42:06.881 16:59:03 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:42:06.881 16:59:03 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:42:06.881 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:42:06.881 16:59:03 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:42:06.881 00:42:06.881 real 0m27.252s 00:42:06.881 user 1m8.317s 00:42:06.881 sys 0m1.605s 00:42:06.881 16:59:03 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:06.881 16:59:03 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:42:06.881 ************************************ 00:42:06.881 END TEST bdev_fio 00:42:06.881 ************************************ 00:42:06.881 16:59:03 blockdev_crypto_sw -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:06.882 16:59:03 blockdev_crypto_sw -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:42:06.882 16:59:03 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:42:06.882 16:59:03 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:06.882 16:59:03 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:42:07.139 ************************************ 00:42:07.139 START TEST bdev_verify 00:42:07.139 ************************************ 00:42:07.139 16:59:03 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:42:07.139 [2024-07-24 16:59:03.881665] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:42:07.139 [2024-07-24 16:59:03.881779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1897943 ] 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:01.0 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:01.1 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:01.2 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:01.3 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:01.4 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:42:07.398 EAL: Requested device 0000:3d:01.5 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:01.6 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:01.7 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:02.0 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:02.1 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:02.2 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:02.3 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:02.4 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:02.5 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:02.6 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3d:02.7 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:01.0 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:01.1 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:01.2 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: 
Requested device 0000:3f:01.3 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:01.4 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:01.5 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:01.6 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:01.7 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:02.0 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:02.1 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:02.2 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:02.3 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:02.4 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:02.5 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:02.6 cannot be used 00:42:07.398 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.398 EAL: Requested device 0000:3f:02.7 cannot be used 00:42:07.398 [2024-07-24 16:59:04.109707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:07.657 [2024-07-24 16:59:04.393927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:07.657 [2024-07-24 16:59:04.393932] reactor.c: 
941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:08.222 [2024-07-24 16:59:04.991149] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:42:08.222 [2024-07-24 16:59:04.991220] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:42:08.222 [2024-07-24 16:59:04.991246] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:08.222 [2024-07-24 16:59:04.999156] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:42:08.222 [2024-07-24 16:59:04.999197] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:42:08.222 [2024-07-24 16:59:04.999214] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:08.222 [2024-07-24 16:59:05.007188] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:42:08.222 [2024-07-24 16:59:05.007227] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:42:08.222 [2024-07-24 16:59:05.007242] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:08.479 Running I/O for 5 seconds... 
00:42:13.743 00:42:13.743 Latency(us) 00:42:13.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:13.743 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:42:13.743 Verification LBA range: start 0x0 length 0x800 00:42:13.744 crypto_ram : 5.01 6083.33 23.76 0.00 0.00 20955.32 1677.72 29779.56 00:42:13.744 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:42:13.744 Verification LBA range: start 0x800 length 0x800 00:42:13.744 crypto_ram : 5.02 6117.30 23.90 0.00 0.00 20840.26 1821.90 29569.84 00:42:13.744 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:42:13.744 Verification LBA range: start 0x0 length 0x800 00:42:13.744 crypto_ram3 : 5.03 3056.67 11.94 0.00 0.00 41621.33 2464.15 34603.01 00:42:13.744 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:42:13.744 Verification LBA range: start 0x800 length 0x800 00:42:13.744 crypto_ram3 : 5.02 3057.02 11.94 0.00 0.00 41611.26 8074.04 34603.01 00:42:13.744 =================================================================================================================== 00:42:13.744 Total : 18314.33 71.54 0.00 0.00 27823.03 1677.72 34603.01 00:42:15.121 00:42:15.121 real 0m8.197s 00:42:15.121 user 0m14.688s 00:42:15.121 sys 0m0.410s 00:42:15.121 16:59:11 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:15.121 16:59:11 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:42:15.121 ************************************ 00:42:15.121 END TEST bdev_verify 00:42:15.121 ************************************ 00:42:15.380 16:59:12 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:42:15.380 16:59:12 
blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:42:15.380 16:59:12 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:15.380 16:59:12 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:42:15.380 ************************************ 00:42:15.380 START TEST bdev_verify_big_io 00:42:15.380 ************************************ 00:42:15.380 16:59:12 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:42:15.380 [2024-07-24 16:59:12.157899] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:42:15.380 [2024-07-24 16:59:12.158015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1899272 ] 00:42:15.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.640 EAL: Requested device 0000:3d:01.0 cannot be used 00:42:15.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.640 EAL: Requested device 0000:3d:01.1 cannot be used 00:42:15.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.640 EAL: Requested device 0000:3d:01.2 cannot be used 00:42:15.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.640 EAL: Requested device 0000:3d:01.3 cannot be used 00:42:15.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.640 EAL: Requested device 0000:3d:01.4 cannot be used 00:42:15.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.640 EAL: Requested device 0000:3d:01.5 cannot be used 00:42:15.640 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:42:15.640 EAL: Requested device 0000:3d:01.6 cannot be used 00:42:15.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.640 EAL: Requested device 0000:3d:01.7 cannot be used 00:42:15.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.640 EAL: Requested device 0000:3d:02.0 cannot be used 00:42:15.640 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3d:02.1 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3d:02.2 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3d:02.3 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3d:02.4 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3d:02.5 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3d:02.6 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3d:02.7 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:01.0 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:01.1 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:01.2 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:01.3 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:42:15.641 EAL: Requested device 0000:3f:01.4 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:01.5 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:01.6 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:01.7 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:02.0 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:02.1 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:02.2 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:02.3 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:02.4 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:02.5 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:02.6 cannot be used 00:42:15.641 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:15.641 EAL: Requested device 0000:3f:02.7 cannot be used 00:42:15.641 [2024-07-24 16:59:12.383894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:15.899 [2024-07-24 16:59:12.670939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.899 [2024-07-24 16:59:12.670945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:16.466 [2024-07-24 16:59:13.209628] vbdev_crypto_rpc.c: 
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:42:16.466 [2024-07-24 16:59:13.209712] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:42:16.466 [2024-07-24 16:59:13.209732] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:16.466 [2024-07-24 16:59:13.217630] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:42:16.466 [2024-07-24 16:59:13.217669] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:42:16.466 [2024-07-24 16:59:13.217685] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:16.466 [2024-07-24 16:59:13.225657] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:42:16.466 [2024-07-24 16:59:13.225693] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:42:16.466 [2024-07-24 16:59:13.225708] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:16.731 Running I/O for 5 seconds... 
00:42:21.998
00:42:21.998 Latency(us)
00:42:21.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:21.998 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:42:21.998 Verification LBA range: start 0x0 length 0x80
00:42:21.998 crypto_ram : 5.04 456.81 28.55 0.00 0.00 273544.68 6474.96 377487.36
00:42:21.998 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:42:21.998 Verification LBA range: start 0x80 length 0x80
00:42:21.998 crypto_ram : 5.03 457.75 28.61 0.00 0.00 273072.08 7182.75 375809.64
00:42:21.998 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:42:21.998 Verification LBA range: start 0x0 length 0x80
00:42:21.998 crypto_ram3 : 5.22 245.07 15.32 0.00 0.00 490469.87 5950.67 380842.80
00:42:21.998 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:42:21.998 Verification LBA range: start 0x80 length 0x80
00:42:21.998 crypto_ram3 : 5.21 245.58 15.35 0.00 0.00 489629.39 6448.74 379165.08
00:42:21.998 ===================================================================================================================
00:42:21.998 Total : 1405.20 87.83 0.00 0.00 350715.97 5950.67 380842.80
00:42:23.902
00:42:23.902 real 0m8.439s
00:42:23.902 user 0m15.174s
00:42:23.902 sys 0m0.401s
00:42:23.902 16:59:20 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:42:23.902 16:59:20 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:42:23.902 ************************************
00:42:23.902 END TEST bdev_verify_big_io
00:42:23.902 ************************************
00:42:23.902 16:59:20 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:42:23.902 16:59:20 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:42:23.902 16:59:20 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:23.902 16:59:20 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:42:23.902 ************************************ 00:42:23.902 START TEST bdev_write_zeroes 00:42:23.902 ************************************ 00:42:23.902 16:59:20 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:23.902 [2024-07-24 16:59:20.664730] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:42:23.902 [2024-07-24 16:59:20.664841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1900610 ] 00:42:24.161 [2024-07-24 16:59:20.889344] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:24.420 [2024-07-24 16:59:21.172341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:24.986 [2024-07-24 16:59:21.738367] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key
"test_dek_sw" 00:42:24.986 [2024-07-24 16:59:21.738446] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:42:24.986 [2024-07-24 16:59:21.738465] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:24.986 [2024-07-24 16:59:21.746384] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:42:24.986 [2024-07-24 16:59:21.746424] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:42:24.986 [2024-07-24 16:59:21.746440] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:24.986 [2024-07-24 16:59:21.754403] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:42:24.986 [2024-07-24 16:59:21.754438] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:42:24.986 [2024-07-24 16:59:21.754453] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:24.986 Running I/O for 1 seconds... 
00:42:26.362
00:42:26.362 Latency(us)
00:42:26.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:26.362 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:42:26.362 crypto_ram : 1.01 26676.49 104.21 0.00 0.00 4786.40 1291.06 6658.46
00:42:26.362 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:42:26.362 crypto_ram3 : 1.01 13311.20 52.00 0.00 0.00 9541.49 6029.31 9909.04
00:42:26.362 ===================================================================================================================
00:42:26.362 Total : 39987.69 156.20 0.00 0.00 6371.43 1291.06 9909.04
00:42:28.266
00:42:28.266 real 0m4.062s
00:42:28.266 user 0m3.661s
00:42:28.266 sys 0m0.370s
00:42:28.266 16:59:24 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:42:28.266 16:59:24 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:42:28.266 ************************************
00:42:28.266 END TEST bdev_write_zeroes
00:42:28.266 ************************************
00:42:28.266 16:59:24 blockdev_crypto_sw -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:42:28.266 16:59:24 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:42:28.266 16:59:24 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable
00:42:28.266 16:59:24 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:42:28.266 ************************************
00:42:28.266 START TEST bdev_json_nonenclosed
00:42:28.266 ************************************
00:42:28.266 16:59:24 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:28.266 [2024-07-24 16:59:24.811696] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:42:28.266 [2024-07-24 16:59:24.811809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901223 ] 00:42:28.266 [2024-07-24 16:59:25.037275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:28.525 [2024-07-24 16:59:25.303243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:28.525 [2024-07-24 16:59:25.303341] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:42:28.525 [2024-07-24 16:59:25.303368] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:28.525 [2024-07-24 16:59:25.303384] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:29.092 00:42:29.092 real 0m1.156s 00:42:29.092 user 0m0.881s 00:42:29.092 sys 0m0.269s 00:42:29.092 16:59:25 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:29.092 16:59:25 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:42:29.092 ************************************ 00:42:29.092 END TEST bdev_json_nonenclosed 00:42:29.092 ************************************ 00:42:29.092 16:59:25 blockdev_crypto_sw -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:29.092 16:59:25 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:42:29.092 16:59:25 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:29.092 16:59:25 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:42:29.092 ************************************ 00:42:29.092 START TEST bdev_json_nonarray 00:42:29.092 ************************************ 00:42:29.092 16:59:25 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:29.351 [2024-07-24 16:59:26.049571] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:42:29.351 [2024-07-24 16:59:26.049684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901438 ] 00:42:29.610 [2024-07-24 16:59:26.277348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:29.869 [2024-07-24 16:59:26.547454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:29.869 [2024-07-24 16:59:26.547541] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:42:29.869 [2024-07-24 16:59:26.547576] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:42:29.869 [2024-07-24 16:59:26.547592] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:42:30.436
00:42:30.436 real 0m1.185s
00:42:30.436 user 0m0.923s
00:42:30.436 sys 0m0.254s
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:42:30.436 ************************************
00:42:30.436 END TEST bdev_json_nonarray
00:42:30.436 ************************************
00:42:30.436 16:59:27 blockdev_crypto_sw -- bdev/blockdev.sh@786 -- # [[ crypto_sw == bdev ]]
00:42:30.436 16:59:27 blockdev_crypto_sw -- bdev/blockdev.sh@793 -- # [[ crypto_sw == gpt ]]
00:42:30.436 16:59:27 blockdev_crypto_sw -- bdev/blockdev.sh@797 -- # [[ crypto_sw == crypto_sw ]]
00:42:30.436 16:59:27 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # run_test bdev_crypto_enomem bdev_crypto_enomem
00:42:30.436 16:59:27 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:42:30.436 16:59:27 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable
00:42:30.436 16:59:27 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:42:30.436 ************************************
00:42:30.436 START TEST bdev_crypto_enomem
00:42:30.436 ************************************
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1125 -- # bdev_crypto_enomem
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@634 -- # local base_dev=base0
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local test_dev=crypt0
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local err_dev=EE_base0
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local qd=32
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # ERR_PID=1901716
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f ''
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # waitforlisten 1901716
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@831 -- # '[' -z 1901716 ']'
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # local max_retries=100
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@840 -- # xtrace_disable
00:42:30.436 16:59:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:42:30.695 [2024-07-24 16:59:27.306480] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:42:30.695 [2024-07-24 16:59:27.306601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1901716 ] 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:01.0 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:01.1 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:01.2 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:01.3 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:01.4 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:01.5 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:01.6 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:01.7 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:02.0 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:02.1 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:02.2 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:02.3 cannot be used 
00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:02.4 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:02.5 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:02.6 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3d:02.7 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3f:01.0 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3f:01.1 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3f:01.2 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3f:01.3 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3f:01.4 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3f:01.5 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3f:01.6 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3f:01.7 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3f:02.0 cannot be used 00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:30.695 EAL: Requested device 0000:3f:02.1 cannot be used 00:42:30.695 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:42:30.695 EAL: Requested device 0000:3f:02.2 cannot be used
00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:42:30.695 EAL: Requested device 0000:3f:02.3 cannot be used
00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:42:30.695 EAL: Requested device 0000:3f:02.4 cannot be used
00:42:30.695 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:42:30.695 EAL: Requested device 0000:3f:02.5 cannot be used
00:42:30.696 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:42:30.696 EAL: Requested device 0000:3f:02.6 cannot be used
00:42:30.696 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:42:30.696 EAL: Requested device 0000:3f:02.7 cannot be used
00:42:30.696 [2024-07-24 16:59:27.519711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:30.954 [2024-07-24 16:59:27.795539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@864 -- # return 0
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@644 -- # rpc_cmd
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:42:31.522 true
00:42:31.522 base0
00:42:31.522 true
00:42:31.522 [2024-07-24 16:59:28.309178] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
00:42:31.522 crypt0
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@651 -- # waitforbdev crypt0
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local bdev_name=crypt0
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@901 -- # local i
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:31.522 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:42:31.522 [
00:42:31.522 {
00:42:31.522 "name": "crypt0",
00:42:31.522 "aliases": [
00:42:31.522 "446a351d-6b63-57b6-993c-46dc935e8b36"
00:42:31.522 ],
00:42:31.522 "product_name": "crypto",
00:42:31.522 "block_size": 512,
00:42:31.522 "num_blocks": 2097152,
00:42:31.522 "uuid": "446a351d-6b63-57b6-993c-46dc935e8b36",
00:42:31.522 "assigned_rate_limits": {
00:42:31.522 "rw_ios_per_sec": 0,
00:42:31.522 "rw_mbytes_per_sec": 0,
00:42:31.522 "r_mbytes_per_sec": 0,
00:42:31.522 "w_mbytes_per_sec": 0
00:42:31.522 },
00:42:31.522 "claimed": false,
00:42:31.522 "zoned": false,
00:42:31.522 "supported_io_types": {
00:42:31.523 "read": true,
00:42:31.523 "write": true,
00:42:31.523 "unmap": false,
00:42:31.523 "flush": false,
00:42:31.523 "reset": true,
00:42:31.523 "nvme_admin": false,
00:42:31.523 "nvme_io": false,
00:42:31.523 "nvme_io_md": false,
00:42:31.523 "write_zeroes": true,
00:42:31.523 "zcopy": false,
00:42:31.523 "get_zone_info": false,
00:42:31.523 "zone_management": false,
00:42:31.523 "zone_append": false,
00:42:31.523 "compare": false,
00:42:31.523 "compare_and_write": false,
00:42:31.523 "abort": false,
00:42:31.523 "seek_hole": false,
00:42:31.523 "seek_data": false,
00:42:31.523 "copy": false,
00:42:31.523 "nvme_iov_md": false
00:42:31.523 },
00:42:31.523 "memory_domains": [
00:42:31.523 {
00:42:31.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:42:31.523 "dma_device_type": 2
00:42:31.523 }
00:42:31.523 ],
00:42:31.523 "driver_specific": {
00:42:31.523 "crypto": {
00:42:31.523 "base_bdev_name": "EE_base0",
00:42:31.523 "name": "crypt0",
00:42:31.523 "key_name": "test_dek_sw"
00:42:31.523 }
00:42:31.523 }
00:42:31.523 }
00:42:31.523 ]
00:42:31.523 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:31.523 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@907 -- # return 0
00:42:31.523 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # rpcpid=1901921
00:42:31.523 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@656 -- # sleep 1
00:42:31.523 16:59:28 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:42:31.781 Running I/O for 5 seconds...
00:42:32.717 16:59:29 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem
00:42:32.717 16:59:29 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:32.717 16:59:29 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:42:32.717 16:59:29 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:32.717 16:59:29 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@659 -- # wait 1901921
00:42:36.969
00:42:36.969 Latency(us)
00:42:36.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:36.969 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096)
00:42:36.969 crypt0 : 5.00 35966.19 140.49 0.00 0.00 885.41 419.43 1382.81
00:42:36.969 ===================================================================================================================
00:42:36.969 Total : 35966.19 140.49 0.00 0.00 885.41 419.43 1382.81
00:42:36.969 0
00:42:36.969 16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@661 -- # rpc_cmd bdev_crypto_delete crypt0
00:42:36.969 16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:36.969 16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:42:36.969 16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:36.969 16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@663 -- # killprocess 1901716
00:42:36.969 16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@950 -- # '[' -z 1901716 ']'
00:42:36.969 16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # kill -0 1901716
00:42:36.969 16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # uname
00:42:36.969 16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:42:36.969 16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1901716
00:42:36.969 16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:42:36.969 16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:42:36.969 16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1901716'
killing process with pid 1901716
16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@969 -- # kill 1901716
Received shutdown signal, test time was about 5.000000 seconds
00:42:36.969
00:42:36.969 Latency(us)
00:42:36.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:42:36.969 ===================================================================================================================
00:42:36.969 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:42:36.969 16:59:33 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@974 -- # wait 1901716
00:42:38.870 16:59:35 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # trap - SIGINT SIGTERM EXIT
00:42:38.870
00:42:38.870 real 0m8.006s
00:42:38.870 user 0m8.067s
00:42:38.870 sys 0m0.535s
00:42:38.870 16:59:35 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1126 -- # xtrace_disable
00:42:38.870 16:59:35 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:42:38.870 ************************************
00:42:38.870 END TEST bdev_crypto_enomem
00:42:38.870 ************************************
00:42:38.870 16:59:35 blockdev_crypto_sw -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:42:38.870 16:59:35 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # cleanup
00:42:38.870 16:59:35 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile
00:42:38.870 16:59:35 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:42:38.870 16:59:35 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]]
00:42:38.870 16:59:35 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]]
00:42:38.870 16:59:35 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]]
00:42:38.870 16:59:35 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]]
00:42:38.870
00:42:38.870 real 1m19.800s
00:42:38.870 user 2m19.000s
00:42:38.870 sys 0m8.677s
00:42:38.870 16:59:35 blockdev_crypto_sw -- common/autotest_common.sh@1126 -- # xtrace_disable
00:42:38.870 16:59:35 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:42:38.870 ************************************
00:42:38.870 END TEST blockdev_crypto_sw
00:42:38.870 ************************************
00:42:38.870 16:59:35 -- spdk/autotest.sh@363 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat
00:42:38.870 16:59:35 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:42:38.870 16:59:35 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:42:38.870 16:59:35 -- common/autotest_common.sh@10 -- # set +x
00:42:38.870 ************************************
00:42:38.870 START TEST blockdev_crypto_qat
00:42:38.870 ************************************
00:42:38.870 16:59:35 blockdev_crypto_qat -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat
00:42:38.870 * Looking for test storage...
00:42:38.870 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # :
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # uname -s
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@681 -- # test_type=crypto_qat
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # crypto_device=
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # dek=
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # env_ctx=
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == bdev ]]
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == crypto_* ]]
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=1903096
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 1903096
00:42:38.870 16:59:35 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc
00:42:38.870 16:59:35 blockdev_crypto_qat -- common/autotest_common.sh@831 -- # '[' -z 1903096 ']'
00:42:38.870 16:59:35 blockdev_crypto_qat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:42:38.870 16:59:35 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # local max_retries=100
00:42:38.870 16:59:35 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:42:38.870 16:59:35 blockdev_crypto_qat -- common/autotest_common.sh@840 -- # xtrace_disable
00:42:38.870 16:59:35 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:42:38.870 [2024-07-24 16:59:35.569517] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:42:38.870 [2024-07-24 16:59:35.569636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1903096 ]
00:42:39.128 [2024-07-24 16:59:35.795361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:42:39.386 [2024-07-24 16:59:36.084845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:42:39.644 16:59:36 blockdev_crypto_qat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:42:39.644 16:59:36 blockdev_crypto_qat -- common/autotest_common.sh@864 -- # return 0
00:42:39.644 16:59:36 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:42:39.644 16:59:36 blockdev_crypto_qat -- bdev/blockdev.sh@707 -- # setup_crypto_qat_conf
00:42:39.644 16:59:36 blockdev_crypto_qat -- bdev/blockdev.sh@169 -- # rpc_cmd
00:42:39.644 16:59:36 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:39.644 16:59:36 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:42:39.644 [2024-07-24 16:59:36.418535] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:42:39.644 [2024-07-24 16:59:36.426594] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:42:39.644 [2024-07-24 16:59:36.434611] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:42:40.210 [2024-07-24 16:59:36.793879] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:42:43.493 true
00:42:43.493 true
00:42:43.493 true
00:42:43.493 true
00:42:43.752 Malloc0
00:42:43.752 Malloc1
00:42:43.752 Malloc2
00:42:43.752 Malloc3
00:42:43.752 [2024-07-24 16:59:40.544418] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:42:43.752 crypto_ram
00:42:43.752 [2024-07-24 16:59:40.552569] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:42:43.752 crypto_ram1
00:42:43.752 [2024-07-24 16:59:40.560716] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:42:43.752 crypto_ram2
00:42:43.752 [2024-07-24 16:59:40.568765] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:42:43.752 crypto_ram3
00:42:43.752 [
00:42:43.752 {
00:42:43.752 "name": "Malloc1",
00:42:43.752 "aliases": [
00:42:43.752 "1a8bca94-fd87-42e9-a350-30b6f511dd86"
00:42:43.752 ],
00:42:43.752 "product_name": "Malloc disk",
00:42:43.752 "block_size": 512,
00:42:43.752 "num_blocks": 65536,
00:42:43.752 "uuid": "1a8bca94-fd87-42e9-a350-30b6f511dd86",
00:42:43.752 "assigned_rate_limits": {
00:42:43.752 "rw_ios_per_sec": 0,
00:42:43.752 "rw_mbytes_per_sec": 0,
00:42:43.752 "r_mbytes_per_sec": 0,
00:42:43.752 "w_mbytes_per_sec": 0
00:42:43.752 },
00:42:43.752 "claimed": true,
00:42:43.752 "claim_type": "exclusive_write",
00:42:43.752 "zoned": false,
00:42:43.752 "supported_io_types": {
00:42:43.752 "read": true,
00:42:43.752 "write": true,
00:42:43.752 "unmap": true,
00:42:43.752 "flush": true,
00:42:43.752 "reset": true,
00:42:43.752 "nvme_admin": false,
00:42:43.752 "nvme_io": false,
00:42:43.752 "nvme_io_md": false,
00:42:43.752 "write_zeroes": true,
00:42:43.752 "zcopy": true,
00:42:43.752 "get_zone_info": false,
00:42:43.752 "zone_management": false,
00:42:43.752 "zone_append": false,
00:42:43.752 "compare": false,
00:42:43.752 "compare_and_write": false,
00:42:43.752 "abort": true,
00:42:43.752 "seek_hole": false,
00:42:43.752 "seek_data": false,
00:42:43.752 "copy": true,
00:42:43.752 "nvme_iov_md": false
00:42:43.752 },
00:42:43.752 "memory_domains": [
00:42:43.752 {
00:42:43.752 "dma_device_id": "system",
00:42:43.752 "dma_device_type": 1
00:42:43.752 },
00:42:43.752 {
00:42:43.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:42:43.752 "dma_device_type": 2
00:42:43.752 }
00:42:43.752 ],
00:42:43.752 "driver_specific": {}
00:42:43.752 }
00:42:43.752 ]
00:42:43.752 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:43.752 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:42:43.752 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:43.752 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:42:43.752 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:43.752 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # cat
00:42:43.752 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:42:43.752 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:43.752 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:44.011 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable
00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:42:44.011 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:44.011 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:42:44.011 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:42:44.011 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:44.011 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:42:44.011 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r .name 00:42:44.011 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "928e1e7d-f372-5295-9a20-e5bd27f14170"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "928e1e7d-f372-5295-9a20-e5bd27f14170",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "b0cc1dbb-2775-58e8-937a-b0164ae15c82"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b0cc1dbb-2775-58e8-937a-b0164ae15c82",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "2390905d-399b-5c6b-a399-26f87ba7674e"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "2390905d-399b-5c6b-a399-26f87ba7674e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "6b3bacba-5f32-5c50-903d-83aa8c21a0ef"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "6b3bacba-5f32-5c50-903d-83aa8c21a0ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:42:44.011 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:42:44.011 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:42:44.011 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # trap 
- SIGINT SIGTERM EXIT 00:42:44.011 16:59:40 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # killprocess 1903096 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@950 -- # '[' -z 1903096 ']' 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # kill -0 1903096 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # uname 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1903096 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1903096' 00:42:44.011 killing process with pid 1903096 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@969 -- # kill 1903096 00:42:44.011 16:59:40 blockdev_crypto_qat -- common/autotest_common.sh@974 -- # wait 1903096 00:42:48.216 16:59:45 blockdev_crypto_qat -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:48.216 16:59:45 blockdev_crypto_qat -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:42:48.216 16:59:45 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:42:48.216 16:59:45 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:48.216 16:59:45 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:48.474 ************************************ 00:42:48.474 START TEST bdev_hello_world 00:42:48.474 ************************************ 00:42:48.474 16:59:45 
blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:42:48.474 [2024-07-24 16:59:45.178855] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:42:48.475 [2024-07-24 16:59:45.178965] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904687 ] 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:01.0 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:01.1 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:01.2 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:01.3 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:01.4 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:01.5 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:01.6 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:01.7 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:02.0 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:42:48.475 EAL: Requested device 0000:3d:02.1 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:02.2 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:02.3 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:02.4 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:02.5 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:02.6 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3d:02.7 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:01.0 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:01.1 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:01.2 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:01.3 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:01.4 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:01.5 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:01.6 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 
EAL: Requested device 0000:3f:01.7 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:02.0 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:02.1 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:02.2 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:02.3 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:02.4 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:02.5 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:02.6 cannot be used 00:42:48.475 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:48.475 EAL: Requested device 0000:3f:02.7 cannot be used 00:42:48.733 [2024-07-24 16:59:45.401921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:48.992 [2024-07-24 16:59:45.685959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:48.992 [2024-07-24 16:59:45.707736] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:42:48.992 [2024-07-24 16:59:45.715759] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:42:48.992 [2024-07-24 16:59:45.723767] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:42:49.559 [2024-07-24 16:59:46.113012] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:42:52.092 [2024-07-24 
16:59:48.921813] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:42:52.092 [2024-07-24 16:59:48.921898] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:42:52.092 [2024-07-24 16:59:48.921917] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:52.092 [2024-07-24 16:59:48.929825] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:42:52.092 [2024-07-24 16:59:48.929864] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:42:52.092 [2024-07-24 16:59:48.929884] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:52.092 [2024-07-24 16:59:48.937871] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:42:52.092 [2024-07-24 16:59:48.937905] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:42:52.092 [2024-07-24 16:59:48.937920] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:52.092 [2024-07-24 16:59:48.945868] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:42:52.092 [2024-07-24 16:59:48.945901] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:42:52.092 [2024-07-24 16:59:48.945916] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:52.350 [2024-07-24 16:59:49.211569] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:42:52.350 [2024-07-24 16:59:49.211619] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:42:52.350 [2024-07-24 16:59:49.211647] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:42:52.609 [2024-07-24 16:59:49.213903] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to 
the bdev 00:42:52.609 [2024-07-24 16:59:49.214009] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:42:52.609 [2024-07-24 16:59:49.214033] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:42:52.609 [2024-07-24 16:59:49.214095] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:42:52.609 00:42:52.609 [2024-07-24 16:59:49.214121] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:42:55.169 00:42:55.169 real 0m6.702s 00:42:55.169 user 0m6.142s 00:42:55.169 sys 0m0.510s 00:42:55.169 16:59:51 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:55.169 16:59:51 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:42:55.170 ************************************ 00:42:55.170 END TEST bdev_hello_world 00:42:55.170 ************************************ 00:42:55.170 16:59:51 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:42:55.170 16:59:51 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:55.170 16:59:51 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:55.170 16:59:51 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:55.170 ************************************ 00:42:55.170 START TEST bdev_bounds 00:42:55.170 ************************************ 00:42:55.170 16:59:51 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:42:55.170 16:59:51 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=1905752 00:42:55.170 16:59:51 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:42:55.170 16:59:51 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 1905752' 00:42:55.170 Process bdevio pid: 1905752 00:42:55.170 16:59:51 blockdev_crypto_qat.bdev_bounds 
-- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:42:55.170 16:59:51 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 1905752 00:42:55.170 16:59:51 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 1905752 ']' 00:42:55.170 16:59:51 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:55.170 16:59:51 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:55.170 16:59:51 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:55.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:55.170 16:59:51 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:55.170 16:59:51 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:42:55.170 [2024-07-24 16:59:51.937974] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:42:55.170 [2024-07-24 16:59:51.938191] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905752 ] 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:01.0 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:01.1 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:01.2 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:01.3 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:01.4 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:01.5 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:01.6 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:01.7 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:02.0 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:02.1 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:02.2 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:02.3 cannot be used 
00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:02.4 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:02.5 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:02.6 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3d:02.7 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:01.0 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:01.1 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:01.2 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:01.3 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:01.4 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:01.5 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:01.6 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:01.7 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:02.0 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:02.1 cannot be used 00:42:55.429 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:02.2 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:02.3 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:02.4 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:02.5 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:02.6 cannot be used 00:42:55.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:55.429 EAL: Requested device 0000:3f:02.7 cannot be used 00:42:55.429 [2024-07-24 16:59:52.165413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:55.688 [2024-07-24 16:59:52.458937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:55.688 [2024-07-24 16:59:52.459005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:55.688 [2024-07-24 16:59:52.459007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:42:55.688 [2024-07-24 16:59:52.480820] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:42:55.688 [2024-07-24 16:59:52.488836] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:42:55.688 [2024-07-24 16:59:52.496868] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:42:56.255 [2024-07-24 16:59:52.862376] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:42:59.542 [2024-07-24 16:59:55.702114] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:42:59.542 [2024-07-24 16:59:55.702192] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:42:59.542 [2024-07-24 16:59:55.702214] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:59.542 [2024-07-24 16:59:55.710128] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:42:59.542 [2024-07-24 16:59:55.710171] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:42:59.542 [2024-07-24 16:59:55.710187] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:59.542 [2024-07-24 16:59:55.718189] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:42:59.542 [2024-07-24 16:59:55.718222] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:42:59.542 [2024-07-24 16:59:55.718238] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:59.542 [2024-07-24 16:59:55.726182] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:42:59.542 [2024-07-24 16:59:55.726235] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:42:59.542 [2024-07-24 16:59:55.726251] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:59.542 16:59:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:59.542 16:59:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:42:59.542 16:59:56 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:42:59.542 I/O targets: 00:42:59.542 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:42:59.542 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:42:59.542 crypto_ram2: 8192 blocks of 4096 
bytes (32 MiB) 00:42:59.542 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:42:59.542 00:42:59.542 00:42:59.542 CUnit - A unit testing framework for C - Version 2.1-3 00:42:59.542 http://cunit.sourceforge.net/ 00:42:59.542 00:42:59.542 00:42:59.542 Suite: bdevio tests on: crypto_ram3 00:42:59.542 Test: blockdev write read block ...passed 00:42:59.542 Test: blockdev write zeroes read block ...passed 00:42:59.542 Test: blockdev write zeroes read no split ...passed 00:42:59.542 Test: blockdev write zeroes read split ...passed 00:42:59.542 Test: blockdev write zeroes read split partial ...passed 00:42:59.542 Test: blockdev reset ...passed 00:42:59.542 Test: blockdev write read 8 blocks ...passed 00:42:59.542 Test: blockdev write read size > 128k ...passed 00:42:59.542 Test: blockdev write read invalid size ...passed 00:42:59.542 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:59.542 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:59.542 Test: blockdev write read max offset ...passed 00:42:59.542 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:59.542 Test: blockdev writev readv 8 blocks ...passed 00:42:59.542 Test: blockdev writev readv 30 x 1block ...passed 00:42:59.542 Test: blockdev writev readv block ...passed 00:42:59.542 Test: blockdev writev readv size > 128k ...passed 00:42:59.542 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:59.542 Test: blockdev comparev and writev ...passed 00:42:59.542 Test: blockdev nvme passthru rw ...passed 00:42:59.542 Test: blockdev nvme passthru vendor specific ...passed 00:42:59.542 Test: blockdev nvme admin passthru ...passed 00:42:59.542 Test: blockdev copy ...passed 00:42:59.542 Suite: bdevio tests on: crypto_ram2 00:42:59.542 Test: blockdev write read block ...passed 00:42:59.542 Test: blockdev write zeroes read block ...passed 00:42:59.542 Test: blockdev write zeroes read no split ...passed 00:42:59.542 Test: 
blockdev write zeroes read split ...passed 00:42:59.801 Test: blockdev write zeroes read split partial ...passed 00:42:59.801 Test: blockdev reset ...passed 00:42:59.801 Test: blockdev write read 8 blocks ...passed 00:42:59.801 Test: blockdev write read size > 128k ...passed 00:42:59.801 Test: blockdev write read invalid size ...passed 00:42:59.801 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:59.801 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:59.801 Test: blockdev write read max offset ...passed 00:42:59.801 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:59.801 Test: blockdev writev readv 8 blocks ...passed 00:42:59.801 Test: blockdev writev readv 30 x 1block ...passed 00:42:59.801 Test: blockdev writev readv block ...passed 00:42:59.801 Test: blockdev writev readv size > 128k ...passed 00:42:59.801 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:59.801 Test: blockdev comparev and writev ...passed 00:42:59.801 Test: blockdev nvme passthru rw ...passed 00:42:59.801 Test: blockdev nvme passthru vendor specific ...passed 00:42:59.801 Test: blockdev nvme admin passthru ...passed 00:42:59.801 Test: blockdev copy ...passed 00:42:59.801 Suite: bdevio tests on: crypto_ram1 00:42:59.801 Test: blockdev write read block ...passed 00:42:59.801 Test: blockdev write zeroes read block ...passed 00:42:59.801 Test: blockdev write zeroes read no split ...passed 00:42:59.801 Test: blockdev write zeroes read split ...passed 00:42:59.801 Test: blockdev write zeroes read split partial ...passed 00:42:59.801 Test: blockdev reset ...passed 00:42:59.801 Test: blockdev write read 8 blocks ...passed 00:42:59.801 Test: blockdev write read size > 128k ...passed 00:42:59.801 Test: blockdev write read invalid size ...passed 00:42:59.801 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:59.801 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:42:59.801 Test: blockdev write read max offset ...passed 00:42:59.801 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:59.801 Test: blockdev writev readv 8 blocks ...passed 00:42:59.801 Test: blockdev writev readv 30 x 1block ...passed 00:42:59.801 Test: blockdev writev readv block ...passed 00:42:59.801 Test: blockdev writev readv size > 128k ...passed 00:42:59.801 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:59.801 Test: blockdev comparev and writev ...passed 00:42:59.801 Test: blockdev nvme passthru rw ...passed 00:42:59.801 Test: blockdev nvme passthru vendor specific ...passed 00:42:59.801 Test: blockdev nvme admin passthru ...passed 00:42:59.801 Test: blockdev copy ...passed 00:42:59.801 Suite: bdevio tests on: crypto_ram 00:42:59.801 Test: blockdev write read block ...passed 00:42:59.801 Test: blockdev write zeroes read block ...passed 00:42:59.801 Test: blockdev write zeroes read no split ...passed 00:43:00.060 Test: blockdev write zeroes read split ...passed 00:43:00.060 Test: blockdev write zeroes read split partial ...passed 00:43:00.060 Test: blockdev reset ...passed 00:43:00.060 Test: blockdev write read 8 blocks ...passed 00:43:00.060 Test: blockdev write read size > 128k ...passed 00:43:00.060 Test: blockdev write read invalid size ...passed 00:43:00.060 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:00.060 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:00.060 Test: blockdev write read max offset ...passed 00:43:00.060 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:00.060 Test: blockdev writev readv 8 blocks ...passed 00:43:00.060 Test: blockdev writev readv 30 x 1block ...passed 00:43:00.060 Test: blockdev writev readv block ...passed 00:43:00.060 Test: blockdev writev readv size > 128k ...passed 00:43:00.060 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:00.060 
Test: blockdev comparev and writev ...passed 00:43:00.060 Test: blockdev nvme passthru rw ...passed 00:43:00.060 Test: blockdev nvme passthru vendor specific ...passed 00:43:00.060 Test: blockdev nvme admin passthru ...passed 00:43:00.060 Test: blockdev copy ...passed 00:43:00.060 00:43:00.060 Run Summary: Type Total Ran Passed Failed Inactive 00:43:00.060 suites 4 4 n/a 0 0 00:43:00.060 tests 92 92 92 0 0 00:43:00.060 asserts 520 520 520 0 n/a 00:43:00.060 00:43:00.060 Elapsed time = 1.556 seconds 00:43:00.060 0 00:43:00.060 16:59:56 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 1905752 00:43:00.060 16:59:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 1905752 ']' 00:43:00.060 16:59:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 1905752 00:43:00.060 16:59:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:43:00.060 16:59:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:00.060 16:59:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1905752 00:43:00.318 16:59:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:00.318 16:59:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:00.318 16:59:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1905752' 00:43:00.318 killing process with pid 1905752 00:43:00.318 16:59:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@969 -- # kill 1905752 00:43:00.318 16:59:56 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@974 -- # wait 1905752 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:43:02.851 00:43:02.851 real 0m7.553s 00:43:02.851 user 0m20.336s 00:43:02.851 sys 
0m0.751s 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:02.851 ************************************ 00:43:02.851 END TEST bdev_bounds 00:43:02.851 ************************************ 00:43:02.851 16:59:59 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:43:02.851 16:59:59 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:43:02.851 16:59:59 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:02.851 16:59:59 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:43:02.851 ************************************ 00:43:02.851 START TEST bdev_nbd 00:43:02.851 ************************************ 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- 
bdev/blockdev.sh@304 -- # local bdev_num=4 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=1906927 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 1906927 /var/tmp/spdk-nbd.sock 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 1906927 ']' 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:43:02.851 16:59:59 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:43:02.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:02.851 16:59:59 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:02.851 [2024-07-24 16:59:59.598999] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:43:02.851 [2024-07-24 16:59:59.599118] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:01.0 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:01.1 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:01.2 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:01.3 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:01.4 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:01.5 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:01.6 cannot be used 00:43:03.110 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:01.7 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:02.0 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:02.1 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:02.2 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:02.3 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:02.4 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:02.5 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:02.6 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3d:02.7 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:01.0 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:01.1 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:01.2 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:01.3 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:01.4 cannot be used 00:43:03.110 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:01.5 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:01.6 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:01.7 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:02.0 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:02.1 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:02.2 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:02.3 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:02.4 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:02.5 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:02.6 cannot be used 00:43:03.110 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:03.110 EAL: Requested device 0000:3f:02.7 cannot be used 00:43:03.110 [2024-07-24 16:59:59.827712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:03.369 [2024-07-24 17:00:00.120844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:03.369 [2024-07-24 17:00:00.142616] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:43:03.369 [2024-07-24 17:00:00.150650] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will 
be assigned to module dpdk_cryptodev 00:43:03.369 [2024-07-24 17:00:00.158675] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:43:03.935 [2024-07-24 17:00:00.527685] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:43:07.213 [2024-07-24 17:00:03.367321] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:43:07.213 [2024-07-24 17:00:03.367392] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:43:07.213 [2024-07-24 17:00:03.367412] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:07.213 [2024-07-24 17:00:03.375335] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:43:07.213 [2024-07-24 17:00:03.375376] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:43:07.213 [2024-07-24 17:00:03.375394] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:07.213 [2024-07-24 17:00:03.383382] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:43:07.213 [2024-07-24 17:00:03.383418] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:43:07.213 [2024-07-24 17:00:03.383433] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:07.213 [2024-07-24 17:00:03.391358] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:43:07.213 [2024-07-24 17:00:03.391392] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:43:07.213 [2024-07-24 17:00:03.391406] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:07.213 17:00:03 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:43:07.469 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:43:07.469 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd0 00:43:07.469 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:43:07.469 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:43:07.469 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:07.469 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:07.469 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:43:07.470 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:43:07.470 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:07.470 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:07.470 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:07.470 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:07.470 1+0 records in 00:43:07.470 1+0 records out 00:43:07.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309673 s, 13.2 MB/s 00:43:07.470 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:07.470 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:43:07.470 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:07.470 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:07.470 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:07.470 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( 
i++ )) 00:43:07.470 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:07.470 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:07.727 1+0 records in 00:43:07.727 1+0 records out 00:43:07.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459754 s, 8.9 MB/s 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:07.727 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:07.728 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:43:07.985 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:43:07.985 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:43:07.985 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:43:07.985 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:43:07.985 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:07.985 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:07.985 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:43:07.985 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:43:07.985 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:07.985 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:07.985 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:07.985 17:00:04 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:07.985 1+0 records in 00:43:07.985 1+0 records out 00:43:07.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359068 s, 11.4 MB/s 00:43:07.985 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:07.985 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:43:07.986 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:07.986 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:07.986 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:07.986 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:07.986 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:07.986 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 
20 )) 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:08.243 1+0 records in 00:43:08.243 1+0 records out 00:43:08.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388236 s, 10.6 MB/s 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:08.243 17:00:04 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:08.500 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:43:08.500 { 00:43:08.500 "nbd_device": "/dev/nbd0", 00:43:08.500 "bdev_name": "crypto_ram" 00:43:08.500 }, 00:43:08.500 { 
00:43:08.500 "nbd_device": "/dev/nbd1", 00:43:08.500 "bdev_name": "crypto_ram1" 00:43:08.500 }, 00:43:08.500 { 00:43:08.500 "nbd_device": "/dev/nbd2", 00:43:08.500 "bdev_name": "crypto_ram2" 00:43:08.500 }, 00:43:08.500 { 00:43:08.500 "nbd_device": "/dev/nbd3", 00:43:08.500 "bdev_name": "crypto_ram3" 00:43:08.500 } 00:43:08.500 ]' 00:43:08.500 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:43:08.500 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:43:08.500 { 00:43:08.500 "nbd_device": "/dev/nbd0", 00:43:08.500 "bdev_name": "crypto_ram" 00:43:08.500 }, 00:43:08.500 { 00:43:08.500 "nbd_device": "/dev/nbd1", 00:43:08.500 "bdev_name": "crypto_ram1" 00:43:08.500 }, 00:43:08.500 { 00:43:08.500 "nbd_device": "/dev/nbd2", 00:43:08.500 "bdev_name": "crypto_ram2" 00:43:08.500 }, 00:43:08.500 { 00:43:08.500 "nbd_device": "/dev/nbd3", 00:43:08.500 "bdev_name": "crypto_ram3" 00:43:08.500 } 00:43:08.500 ]' 00:43:08.500 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:43:08.500 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:43:08.501 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:08.501 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:43:08.501 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:08.501 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:08.501 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:08.501 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:08.758 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:08.758 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:08.758 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:08.758 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:08.758 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:08.758 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:08.758 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:08.758 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:08.758 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:08.758 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:43:09.015 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:09.015 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:09.015 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:09.015 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:09.015 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:09.015 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:09.015 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:09.015 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:09.015 17:00:05 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:09.015 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:43:09.272 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:43:09.272 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:43:09.272 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:43:09.272 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:09.272 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:09.272 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:43:09.272 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:09.272 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:09.272 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:09.272 17:00:05 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:43:09.530 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:43:09.530 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:43:09.530 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:43:09.530 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:09.530 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:09.530 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:43:09.530 17:00:06 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:09.530 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:09.530 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:09.530 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:09.530 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:09.788 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:09.789 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:09.789 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:43:09.789 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:09.789 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:09.789 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:43:10.049 /dev/nbd0 00:43:10.049 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:10.049 17:00:06 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:10.049 17:00:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:43:10.049 17:00:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:10.049 17:00:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:10.049 17:00:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:43:10.049 17:00:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:43:10.049 17:00:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:10.049 17:00:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:10.049 17:00:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:10.049 17:00:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:10.049 1+0 records in 00:43:10.049 1+0 records out 00:43:10.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311694 s, 13.1 MB/s 00:43:10.049 17:00:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:10.049 17:00:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:43:10.050 17:00:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:10.050 17:00:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:10.050 17:00:06 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:10.050 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:10.050 17:00:06 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:10.050 17:00:06 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:43:10.309 /dev/nbd1 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:10.309 1+0 records in 00:43:10.309 1+0 records out 00:43:10.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363444 s, 11.3 MB/s 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:10.309 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:43:10.629 /dev/nbd10 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:10.629 1+0 records in 00:43:10.629 1+0 records out 00:43:10.629 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000215011 s, 19.1 MB/s 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:10.629 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:43:10.887 /dev/nbd11 00:43:10.887 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:43:10.887 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:43:10.887 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:43:10.887 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:10.889 1+0 records in 00:43:10.889 1+0 records out 00:43:10.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359894 s, 11.4 MB/s 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:10.889 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:43:11.148 { 00:43:11.148 "nbd_device": "/dev/nbd0", 00:43:11.148 "bdev_name": "crypto_ram" 00:43:11.148 }, 00:43:11.148 { 00:43:11.148 "nbd_device": "/dev/nbd1", 
00:43:11.148 "bdev_name": "crypto_ram1" 00:43:11.148 }, 00:43:11.148 { 00:43:11.148 "nbd_device": "/dev/nbd10", 00:43:11.148 "bdev_name": "crypto_ram2" 00:43:11.148 }, 00:43:11.148 { 00:43:11.148 "nbd_device": "/dev/nbd11", 00:43:11.148 "bdev_name": "crypto_ram3" 00:43:11.148 } 00:43:11.148 ]' 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:43:11.148 { 00:43:11.148 "nbd_device": "/dev/nbd0", 00:43:11.148 "bdev_name": "crypto_ram" 00:43:11.148 }, 00:43:11.148 { 00:43:11.148 "nbd_device": "/dev/nbd1", 00:43:11.148 "bdev_name": "crypto_ram1" 00:43:11.148 }, 00:43:11.148 { 00:43:11.148 "nbd_device": "/dev/nbd10", 00:43:11.148 "bdev_name": "crypto_ram2" 00:43:11.148 }, 00:43:11.148 { 00:43:11.148 "nbd_device": "/dev/nbd11", 00:43:11.148 "bdev_name": "crypto_ram3" 00:43:11.148 } 00:43:11.148 ]' 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:43:11.148 /dev/nbd1 00:43:11.148 /dev/nbd10 00:43:11.148 /dev/nbd11' 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:43:11.148 /dev/nbd1 00:43:11.148 /dev/nbd10 00:43:11.148 /dev/nbd11' 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:43:11.148 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:43:11.149 256+0 records in 00:43:11.149 256+0 records out 00:43:11.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101434 s, 103 MB/s 00:43:11.149 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:11.149 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:43:11.149 256+0 records in 00:43:11.149 256+0 records out 00:43:11.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0653224 s, 16.1 MB/s 00:43:11.149 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:11.149 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:43:11.149 256+0 records in 00:43:11.149 256+0 records out 00:43:11.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0491042 s, 21.4 MB/s 00:43:11.149 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:11.149 17:00:07 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:43:11.406 256+0 records in 00:43:11.406 256+0 records out 00:43:11.406 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0403331 s, 26.0 MB/s 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:43:11.406 256+0 records in 00:43:11.406 256+0 records out 00:43:11.406 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0367954 s, 28.5 MB/s 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:11.406 17:00:08 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:11.406 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:11.663 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:11.663 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:11.663 
17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:11.663 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:11.663 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:11.663 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:11.663 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:11.663 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:11.663 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:11.663 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:43:11.921 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:11.921 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:11.921 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:11.921 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:11.921 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:11.921 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:11.921 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:11.921 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:11.921 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:11.921 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:43:12.178 17:00:08 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:43:12.178 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:43:12.178 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:43:12.178 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:12.178 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:12.178 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:43:12.178 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:12.178 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:12.178 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:12.178 17:00:08 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:43:12.434 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:43:12.434 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:43:12.434 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:43:12.434 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:12.437 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:12.437 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:43:12.437 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:12.437 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:12.437 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:12.437 17:00:09 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:12.437 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@133 -- # local mkfs_ret 00:43:12.694 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:43:13.259 malloc_lvol_verify 00:43:13.259 17:00:09 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:43:13.517 b55bee90-68fb-46ba-9a86-5c15b8e5193b 00:43:13.517 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:43:13.775 0ee718e8-00a5-4154-bf36-d3e2b43cca06 00:43:13.775 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:43:13.775 /dev/nbd0 00:43:13.775 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:43:13.775 mke2fs 1.46.5 (30-Dec-2021) 00:43:13.775 Discarding device blocks: 0/4096 done 00:43:13.775 Creating filesystem with 4096 1k blocks and 1024 inodes 00:43:13.775 00:43:13.775 Allocating group tables: 0/1 done 00:43:14.032 Writing inode tables: 0/1 done 00:43:14.032 Creating journal (1024 blocks): done 00:43:14.032 Writing superblocks and filesystem accounting information: 0/1 done 00:43:14.032 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:43:14.032 17:00:10 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 1906927 00:43:14.033 17:00:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 1906927 ']' 00:43:14.033 17:00:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 1906927 00:43:14.033 17:00:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:43:14.290 17:00:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:43:14.290 17:00:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1906927 00:43:14.290 17:00:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:14.290 17:00:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:14.290 17:00:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1906927' 00:43:14.290 killing process with pid 1906927 00:43:14.290 17:00:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@969 -- # kill 1906927 00:43:14.290 17:00:10 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@974 -- # wait 1906927 00:43:16.817 17:00:13 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:43:16.817 00:43:16.817 real 0m14.105s 00:43:16.817 user 0m17.310s 00:43:16.817 sys 0m4.102s 00:43:16.817 17:00:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:16.817 17:00:13 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:16.817 ************************************ 00:43:16.817 END TEST bdev_nbd 00:43:16.817 ************************************ 00:43:16.817 17:00:13 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:43:16.817 17:00:13 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = nvme ']' 00:43:16.817 17:00:13 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = gpt ']' 00:43:16.817 17:00:13 blockdev_crypto_qat -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:43:16.817 17:00:13 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:16.817 17:00:13 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:16.817 17:00:13 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:43:16.817 ************************************ 00:43:16.817 START TEST 
bdev_fio 00:43:16.817 ************************************ 00:43:16.817 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:43:16.817 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:43:16.817 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:43:16.818 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- 
common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram1]' 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram1 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- 
bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:43:17.076 ************************************ 00:43:17.076 START TEST bdev_fio_rw_verify 00:43:17.076 ************************************ 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:17.076 
17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:17.076 17:00:13 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:17.668 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:17.668 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:17.668 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:17.668 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:17.668 fio-3.35 00:43:17.668 Starting 4 threads 00:43:17.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.668 EAL: Requested device 0000:3d:01.0 cannot be used 00:43:17.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.668 EAL: Requested device 0000:3d:01.1 cannot be used 00:43:17.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:43:17.668 EAL: Requested device
0000:3f:02.6 cannot be used 00:43:17.668 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.668 EAL: Requested device 0000:3f:02.7 cannot be used 00:43:32.563 00:43:32.563 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1910854: Wed Jul 24 17:00:28 2024 00:43:32.563 read: IOPS=23.1k, BW=90.1MiB/s (94.4MB/s)(901MiB/10001msec) 00:43:32.563 slat (usec): min=18, max=462, avg=57.50, stdev=32.31 00:43:32.563 clat (usec): min=19, max=1501, avg=326.95, stdev=205.89 00:43:32.563 lat (usec): min=39, max=1653, avg=384.45, stdev=223.51 00:43:32.563 clat percentiles (usec): 00:43:32.563 | 50.000th=[ 262], 99.000th=[ 971], 99.900th=[ 1156], 99.990th=[ 1270], 00:43:32.563 | 99.999th=[ 1369] 00:43:32.563 write: IOPS=25.3k, BW=98.9MiB/s (104MB/s)(965MiB/9755msec); 0 zone resets 00:43:32.563 slat (usec): min=29, max=460, avg=71.11, stdev=32.92 00:43:32.563 clat (usec): min=26, max=2019, avg=372.74, stdev=223.17 00:43:32.563 lat (usec): min=65, max=2398, avg=443.85, stdev=240.62 00:43:32.563 clat percentiles (usec): 00:43:32.563 | 50.000th=[ 322], 99.000th=[ 1074], 99.900th=[ 1254], 99.990th=[ 1385], 00:43:32.563 | 99.999th=[ 1860] 00:43:32.563 bw ( KiB/s): min=79624, max=126448, per=97.94%, avg=99209.26, stdev=3355.37, samples=76 00:43:32.563 iops : min=19906, max=31612, avg=24802.32, stdev=838.84, samples=76 00:43:32.563 lat (usec) : 20=0.01%, 50=0.01%, 100=3.11%, 250=37.50%, 500=38.30% 00:43:32.563 lat (usec) : 750=14.79%, 1000=4.87% 00:43:32.563 lat (msec) : 2=1.42%, 4=0.01% 00:43:32.563 cpu : usr=99.24%, sys=0.27%, ctx=81, majf=0, minf=24577 00:43:32.563 IO depths : 1=4.5%, 2=27.3%, 4=54.6%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:32.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.563 complete : 0=0.0%, 4=88.0%, 8=12.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.563 issued rwts: total=230562,247029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:32.563 latency : target=0, window=0, percentile=100.00%, depth=8 
00:43:32.563 00:43:32.563 Run status group 0 (all jobs): 00:43:32.563 READ: bw=90.1MiB/s (94.4MB/s), 90.1MiB/s-90.1MiB/s (94.4MB/s-94.4MB/s), io=901MiB (944MB), run=10001-10001msec 00:43:32.563 WRITE: bw=98.9MiB/s (104MB/s), 98.9MiB/s-98.9MiB/s (104MB/s-104MB/s), io=965MiB (1012MB), run=9755-9755msec 00:43:33.939 ----------------------------------------------------- 00:43:33.939 Suppressions used: 00:43:33.939 count bytes template 00:43:33.939 4 47 /usr/src/fio/parse.c 00:43:33.939 1860 178560 /usr/src/fio/iolog.c 00:43:33.939 1 8 libtcmalloc_minimal.so 00:43:33.939 1 904 libcrypto.so 00:43:33.939 ----------------------------------------------------- 00:43:33.939 00:43:33.939 00:43:33.939 real 0m16.660s 00:43:33.939 user 0m56.842s 00:43:33.939 sys 0m0.967s 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:43:33.940 ************************************ 00:43:33.940 END TEST bdev_fio_rw_verify 00:43:33.940 ************************************ 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:43:33.940 17:00:30 
blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "928e1e7d-f372-5295-9a20-e5bd27f14170"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "928e1e7d-f372-5295-9a20-e5bd27f14170",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "b0cc1dbb-2775-58e8-937a-b0164ae15c82"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b0cc1dbb-2775-58e8-937a-b0164ae15c82",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "2390905d-399b-5c6b-a399-26f87ba7674e"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "2390905d-399b-5c6b-a399-26f87ba7674e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "6b3bacba-5f32-5c50-903d-83aa8c21a0ef"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "6b3bacba-5f32-5c50-903d-83aa8c21a0ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 
-- # [[ -n crypto_ram 00:43:33.940 crypto_ram1 00:43:33.940 crypto_ram2 00:43:33.940 crypto_ram3 ]] 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:43:33.940 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram1]' 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram1 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- #
echo filename=crypto_ram2 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:43:33.941 ************************************ 00:43:33.941 START TEST bdev_fio_trim 00:43:33.941 ************************************ 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1347 -- # break 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:33.941 17:00:30 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:34.508 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:34.508 job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:34.508 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:34.508 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:34.508 fio-3.35 00:43:34.508 Starting 4 threads 00:43:34.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:34.508 EAL: Requested device 0000:3d:01.0 cannot be used 00:43:34.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:34.508 EAL: Requested device 0000:3d:01.1 cannot be used 00:43:34.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:34.508 EAL: Requested device 0000:3d:01.2 cannot be used 00:43:34.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:34.508 EAL: Requested device 0000:3d:01.3 cannot be used 00:43:34.508 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:34.508 EAL: Requested device 0000:3d:01.4 cannot be used 
00:43:49.424 00:43:49.424 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=1913632: Wed Jul 24 17:00:45 2024 00:43:49.424 write: IOPS=39.0k, BW=152MiB/s
(160MB/s)(1522MiB/10001msec); 0 zone resets 00:43:49.424 slat (usec): min=19, max=593, avg=61.06, stdev=34.54 00:43:49.424 clat (usec): min=46, max=1407, avg=215.22, stdev=123.89 00:43:49.424 lat (usec): min=68, max=1823, avg=276.29, stdev=143.36 00:43:49.424 clat percentiles (usec): 00:43:49.424 | 50.000th=[ 192], 99.000th=[ 611], 99.900th=[ 775], 99.990th=[ 889], 00:43:49.424 | 99.999th=[ 1205] 00:43:49.424 bw ( KiB/s): min=146240, max=178464, per=100.00%, avg=156021.89, stdev=1862.32, samples=76 00:43:49.424 iops : min=36560, max=44616, avg=39005.47, stdev=465.58, samples=76 00:43:49.424 trim: IOPS=39.0k, BW=152MiB/s (160MB/s)(1522MiB/10001msec); 0 zone resets 00:43:49.424 slat (nsec): min=6361, max=77370, avg=15392.79, stdev=6060.84 00:43:49.424 clat (usec): min=7, max=1824, avg=276.56, stdev=143.38 00:43:49.424 lat (usec): min=18, max=1844, avg=291.95, stdev=145.44 00:43:49.424 clat percentiles (usec): 00:43:49.424 | 50.000th=[ 243], 99.000th=[ 725], 99.900th=[ 938], 99.990th=[ 1090], 00:43:49.424 | 99.999th=[ 1401] 00:43:49.424 bw ( KiB/s): min=146240, max=178464, per=100.00%, avg=156021.89, stdev=1862.32, samples=76 00:43:49.424 iops : min=36560, max=44616, avg=39005.47, stdev=465.58, samples=76 00:43:49.424 lat (usec) : 10=0.01%, 50=0.07%, 100=9.69%, 250=51.56%, 500=32.30% 00:43:49.424 lat (usec) : 750=5.98%, 1000=0.37% 00:43:49.424 lat (msec) : 2=0.02% 00:43:49.424 cpu : usr=99.49%, sys=0.06%, ctx=73, majf=0, minf=7677 00:43:49.424 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:49.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:49.424 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:49.424 issued rwts: total=0,389756,389757,0 short=0,0,0,0 dropped=0,0,0,0 00:43:49.424 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:49.424 00:43:49.424 Run status group 0 (all jobs): 00:43:49.424 WRITE: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s 
(160MB/s-160MB/s), io=1522MiB (1596MB), run=10001-10001msec 00:43:49.424 TRIM: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=1522MiB (1596MB), run=10001-10001msec 00:43:50.811 ----------------------------------------------------- 00:43:50.811 Suppressions used: 00:43:50.811 count bytes template 00:43:50.811 4 47 /usr/src/fio/parse.c 00:43:50.811 1 8 libtcmalloc_minimal.so 00:43:50.811 1 904 libcrypto.so 00:43:50.811 ----------------------------------------------------- 00:43:50.811 00:43:50.811 00:43:50.811 real 0m16.988s 00:43:50.811 user 0m56.856s 00:43:50.811 sys 0m0.855s 00:43:50.811 17:00:47 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:50.811 17:00:47 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:43:50.811 ************************************ 00:43:50.811 END TEST bdev_fio_trim 00:43:50.811 ************************************ 00:43:50.811 17:00:47 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:43:50.811 17:00:47 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:50.811 17:00:47 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:43:50.811 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:43:50.811 17:00:47 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:43:50.811 00:43:50.811 real 0m33.995s 00:43:50.811 user 1m53.870s 00:43:50.811 sys 0m2.021s 00:43:51.068 17:00:47 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:51.068 17:00:47 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:43:51.068 ************************************ 00:43:51.068 END TEST bdev_fio 00:43:51.068 ************************************ 00:43:51.068 17:00:47 blockdev_crypto_qat -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:51.068 
17:00:47 blockdev_crypto_qat -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:51.068 17:00:47 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:43:51.068 17:00:47 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:51.068 17:00:47 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:43:51.068 ************************************ 00:43:51.068 START TEST bdev_verify 00:43:51.068 ************************************ 00:43:51.068 17:00:47 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:51.068 [2024-07-24 17:00:47.859058] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:43:51.068 [2024-07-24 17:00:47.859184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1915643 ]
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:01.0 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:01.1 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:01.2 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:01.3 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:01.4 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:01.5 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:01.6 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:01.7 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:02.0 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:02.1 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:02.2 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:02.3 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:02.4 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:02.5 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:02.6 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3d:02.7 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:01.0 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:01.1 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:01.2 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:01.3 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:01.4 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:01.5 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:01.6 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:01.7 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:02.0 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:02.1 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:02.2 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:02.3 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:02.4 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:02.5 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:02.6 cannot be used
00:43:51.326 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:43:51.326 EAL: Requested device 0000:3f:02.7 cannot be used
00:43:51.326 [2024-07-24 17:00:48.106326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:43:51.894 [2024-07-24 17:00:48.448382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:43:51.894 [2024-07-24 17:00:48.448382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:43:51.894 [2024-07-24 17:00:48.470544] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:43:51.894 [2024-07-24 17:00:48.478564] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:43:51.894 [2024-07-24 17:00:48.486580] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:43:52.151 [2024-07-24 17:00:48.835992] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:43:55.455 [2024-07-24 17:00:51.672698] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:43:55.455 [2024-07-24 17:00:51.672766] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:43:55.455 [2024-07-24 17:00:51.672794] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:43:55.455 [2024-07-24 17:00:51.680724] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:43:55.455 [2024-07-24 17:00:51.680760] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:43:55.455 [2024-07-24 17:00:51.680777] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:43:55.455 [2024-07-24 17:00:51.688764] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:43:55.455 [2024-07-24 17:00:51.688797] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:43:55.455 [2024-07-24 17:00:51.688812] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:43:55.455 [2024-07-24 17:00:51.696779] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:43:55.455 [2024-07-24 17:00:51.696811] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:43:55.455 [2024-07-24 17:00:51.696825] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:43:55.455 Running I/O for 5 seconds...
00:44:00.726
00:44:00.726 Latency(us)
00:44:00.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:00.726 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:44:00.726 Verification LBA range: start 0x0 length 0x1000
00:44:00.726 crypto_ram : 5.07 454.31 1.77 0.00 0.00 281173.85 8126.46 189582.54
00:44:00.726 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:44:00.726 Verification LBA range: start 0x1000 length 0x1000
00:44:00.726 crypto_ram : 5.07 454.42 1.78 0.00 0.00 281041.90 8283.75 188743.68
00:44:00.726 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:44:00.726 Verification LBA range: start 0x0 length 0x1000
00:44:00.726 crypto_ram1 : 5.08 453.99 1.77 0.00 0.00 280307.60 9279.90 174483.05
00:44:00.726 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:44:00.726 Verification LBA range: start 0x1000 length 0x1000
00:44:00.726 crypto_ram1 : 5.07 454.32 1.77 0.00 0.00 280135.93 8860.47 173644.19
00:44:00.726 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:44:00.726 Verification LBA range: start 0x0 length 0x1000
00:44:00.726 crypto_ram2 : 5.05 3523.74 13.76 0.00 0.00 35978.20 9542.04 35441.87
00:44:00.726 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:44:00.726 Verification LBA range: start 0x1000 length 0x1000
00:44:00.726 crypto_ram2 : 5.06 3531.63 13.80 0.00 0.00 35868.38 3984.59 35441.87
00:44:00.726 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:44:00.726 Verification LBA range: start 0x0 length 0x1000
00:44:00.726 crypto_ram3 : 5.06 3538.14 13.82 0.00 0.00 35731.34 3905.95 35441.87
00:44:00.726 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:44:00.726 Verification LBA range: start 0x1000 length 0x1000
00:44:00.726 crypto_ram3 : 5.06 3539.26 13.83 0.00 0.00 35700.85 3827.30 35441.87
00:44:00.726 ===================================================================================================================
00:44:00.726 Total : 15949.81 62.30 0.00 0.00 63779.51 3827.30 189582.54
00:44:03.266
00:44:03.266
00:44:03.266 real 0m11.962s
00:44:03.266 user 0m21.965s
00:44:03.266 sys 0m0.569s
00:44:03.266 17:00:59 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:44:03.266 17:00:59 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:44:03.266 ************************************
00:44:03.266 END TEST bdev_verify
00:44:03.266 ************************************
00:44:03.266 17:00:59 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:44:03.266 17:00:59 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:44:03.266 17:00:59 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:44:03.266 17:00:59 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:44:03.266 ************************************
00:44:03.266 START TEST bdev_verify_big_io
00:44:03.266 ************************************
00:44:03.266 17:00:59 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:44:03.266 [2024-07-24 17:00:59.901414] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:44:03.266 [2024-07-24 17:00:59.901527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917608 ]
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:01.0 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:01.1 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:01.2 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:01.3 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:01.4 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:01.5 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:01.6 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:01.7 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:02.0 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:02.1 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:02.2 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:02.3 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:02.4 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:02.5 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:02.6 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3d:02.7 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:01.0 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:01.1 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:01.2 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:01.3 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:01.4 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:01.5 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:01.6 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:01.7 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:02.0 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:02.1 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:02.2 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:02.3 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:02.4 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:02.5 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:02.6 cannot be used
00:44:03.266 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:03.266 EAL: Requested device 0000:3f:02.7 cannot be used
00:44:03.525 [2024-07-24 17:01:00.130621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:44:03.784 [2024-07-24 17:01:00.396446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:44:03.784 [2024-07-24 17:01:00.396451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:44:03.784 [2024-07-24 17:01:00.418311] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:44:03.785 [2024-07-24 17:01:00.426333] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:44:03.785 [2024-07-24 17:01:00.434369] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:44:04.043 [2024-07-24 17:01:00.801372] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:44:07.347 [2024-07-24 17:01:03.664081] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:44:07.347 [2024-07-24 17:01:03.664174] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:44:07.347 [2024-07-24 17:01:03.664198] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:44:07.347 [2024-07-24 17:01:03.672094] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:44:07.347 [2024-07-24 17:01:03.672133] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:44:07.347 [2024-07-24 17:01:03.672156] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:44:07.347 [2024-07-24 17:01:03.680135] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:44:07.347 [2024-07-24 17:01:03.680174] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:44:07.347 [2024-07-24 17:01:03.680189] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:44:07.347 [2024-07-24 17:01:03.688142] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:44:07.347 [2024-07-24 17:01:03.688175] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:44:07.347 [2024-07-24 17:01:03.688190] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:44:07.347 Running I/O for 5 seconds...
00:44:08.016 [2024-07-24 17:01:04.826677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.827098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.827492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.827879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.827951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.828005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.828051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.828098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.828524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.828565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.828583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.828599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.832049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.832115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.832172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.832218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.832704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.832767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.832841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.832901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.833373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.833396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.833413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.833431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.836868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.836930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.836976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.837022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.837463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.837516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.837562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.837606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.837995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.838021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.838039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.838056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.841585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.841659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.841717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.841763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.842200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.842252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.842298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.842345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.842746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.842767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.842785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.842802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.846025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.846085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.846151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.846198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.846696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.846748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.846794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.846840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.016 [2024-07-24 17:01:04.847272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.016 [2024-07-24 17:01:04.847295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.016 [2024-07-24 17:01:04.847312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.016 [2024-07-24 17:01:04.847332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.850633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.850695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.850740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.850798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.851301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.851354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.851401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.851446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.851884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.017 [2024-07-24 17:01:04.851906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.851924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.851942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.855035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.855096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.855147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.855193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.855660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.855715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.855761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.855807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.856247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.856272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.017 [2024-07-24 17:01:04.856290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.856307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.859494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.859553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.859599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.859645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.860114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.860173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.860220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.860266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.860675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.860696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.860718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.017 [2024-07-24 17:01:04.860735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.863811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.863872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.863918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.863964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.864416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.864468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.864514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.864559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.864989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.865011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.865029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.865047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.017 [2024-07-24 17:01:04.868100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.868169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.868216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.868274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.868780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.868831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.868877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.868923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.869364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.869386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.869408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.869426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.872432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.017 [2024-07-24 17:01:04.872491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.872535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.872581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.873054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.873109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.873162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.873209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.873615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.873637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.873655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.017 [2024-07-24 17:01:04.873672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.278 [2024-07-24 17:01:04.876839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.278 [2024-07-24 17:01:04.876900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.278 [2024-07-24 17:01:04.876946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.278 [2024-07-24 17:01:04.876992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.278 [2024-07-24 17:01:04.877471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.278 [2024-07-24 17:01:04.877526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.278 [2024-07-24 17:01:04.877574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.877621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.877998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.878020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.878037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.878054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.881133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.881202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.881260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.279 [2024-07-24 17:01:04.881306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.881753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.881815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.881863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.881908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.882274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.882297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.882315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.882336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.885406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.885466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.885523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.885569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.279 [2024-07-24 17:01:04.886053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.886103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.886169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.886229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.886598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.886619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.886636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.886654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.889590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.889650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.889696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.889742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.890212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.279 [2024-07-24 17:01:04.890263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.890325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.890383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.890817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.890839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.890857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.890875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.893884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.893954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.894000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.894045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.894489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.894540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.279 [2024-07-24 17:01:04.894590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.894636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.895058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.895080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.895097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.895116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.898011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.898071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.898117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.898171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.898651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.898703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.898749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.279 [2024-07-24 17:01:04.898795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.899220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.899242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.899261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.899278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.902291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.902351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.902396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.902443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.902921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.902974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.903022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.903067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.279 [2024-07-24 17:01:04.903500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.903524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.903541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.903558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.906403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.906466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.906512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.906557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.906991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.907041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.907086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.907132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.907555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.279 [2024-07-24 17:01:04.907577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.907597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.907615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.279 [2024-07-24 17:01:04.910690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.910748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.910793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.910838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.911288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.911341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.911386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.911432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.911846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.911867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.280 [2024-07-24 17:01:04.911885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.911902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.914758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.914817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.914862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.914909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.915388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.915440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.915486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.915539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.915929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.915950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.280 [2024-07-24 17:01:04.915967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.280 [2024-07-24 17:01:04.915985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.283 [... identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468 repeated through 17:01:05.085448 ...]
00:44:08.283 [2024-07-24 17:01:05.085448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:08.283 [2024-07-24 17:01:05.085773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.085795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.085812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.085829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.088441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.088874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.090222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.091710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.093693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.094645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.095871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.097375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.097698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.283 [2024-07-24 17:01:05.097720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.097737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.097754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.100585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.102105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.103770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.105306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.106397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.107633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.109145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.110658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.111020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.111042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.283 [2024-07-24 17:01:05.111059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.111077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.115419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.117095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.118614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.120014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.121603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.123121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.124632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.125483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.125935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.125957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.125976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.283 [2024-07-24 17:01:05.125995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.129860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.131496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.132961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.134079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.135942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.283 [2024-07-24 17:01:05.137453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.138392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.138797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.139230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.139254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.139274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.139292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.546 [2024-07-24 17:01:05.143109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.144671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.145689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.146918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.148757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.149779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.150181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.150581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.150977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.150999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.151016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.151032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.154651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.546 [2024-07-24 17:01:05.155500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.156733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.158253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.159771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.160181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.160575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.160968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.161392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.161416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.161433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.161451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.163983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.165247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.546 [2024-07-24 17:01:05.166757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.168262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.169017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.169427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.169823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.170226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.170634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.170656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.170673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.170689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.173908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.175427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.176941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.546 [2024-07-24 17:01:05.178060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.178891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.179301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.179698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.180400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.180771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.180793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.180810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.180826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.184220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.185725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.186948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.187357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.546 [2024-07-24 17:01:05.188200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.188885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.189988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.191208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.191533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.191555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.191572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.191590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.194961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.195793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.196210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.196606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.197411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.546 [2024-07-24 17:01:05.198428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.199638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.201154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.201479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.201506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.201522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.201539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.204385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.204794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.205196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.546 [2024-07-24 17:01:05.205593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.206884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.208113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.547 [2024-07-24 17:01:05.209618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.211119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.211482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.211504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.211536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.211552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.213781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.214199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.214601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.214996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.216558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.218064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.219563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.547 [2024-07-24 17:01:05.220489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.220815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.220836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.220852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.220869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.223211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.223616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.224010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.225010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.226881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.228395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.229412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.230931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.547 [2024-07-24 17:01:05.231292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.231316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.231333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.231350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.233841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.234257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.235205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.236419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.238265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.239342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.240816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.242123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.242453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.547 [2024-07-24 17:01:05.242475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.242492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.242509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.245008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.245710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.246931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.248433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.250154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.251350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.252585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.254094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.254427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.547 [2024-07-24 17:01:05.254450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.550 [2024-07-24 17:01:05.347223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.347285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.347333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.347713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.347784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.347848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.347908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.347964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.348385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.348408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.348425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.348449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.350735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.550 [2024-07-24 17:01:05.350795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.350841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.350890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.351284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.351359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.351407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.351457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.351503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.351925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.351947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.351964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.351982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.354322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.550 [2024-07-24 17:01:05.354408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.354455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.354500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.354896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.354954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.355000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.355046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.355092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.355522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.355545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.355564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.355581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.550 [2024-07-24 17:01:05.357857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.551 [2024-07-24 17:01:05.357919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.357965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.358010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.358434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.358494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.358542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.358593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.358639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.359043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.359064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.359080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.359097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.361342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.551 [2024-07-24 17:01:05.361402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.361453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.361498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.361933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.361991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.362051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.362098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.362150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.362606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.362631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.362649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.362666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.364692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.551 [2024-07-24 17:01:05.364764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.364809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.364855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.365313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.365381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.365429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.365475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.365520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.365920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.365941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.365958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.365990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.367983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.551 [2024-07-24 17:01:05.368042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.368087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.368135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.368458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.368525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.368578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.368622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.368668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.368999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.369020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.369036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.369053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.370942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.551 [2024-07-24 17:01:05.371002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.371049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.371094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.371533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.371593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.371640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.371687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.371743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.372192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.372216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.372233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.372251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.374164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.551 [2024-07-24 17:01:05.374224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.374272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.374321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.374697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.374774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.374828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.374873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.374917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.375258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.375280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.375297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.375314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.377302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.551 [2024-07-24 17:01:05.377362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.377408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.377454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.377837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.377899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.377946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.377992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.378036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.378464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.378487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.378504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.378521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.551 [2024-07-24 17:01:05.380299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.551 [2024-07-24 17:01:05.380364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.380409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.380454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.380780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.380842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.380888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.380932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.380988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.381307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.381328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.381345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.381362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.383420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.552 [2024-07-24 17:01:05.383479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.383525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.383581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.384013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.384073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.384119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.384173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.384220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.384623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.384644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.384661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.384678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.386487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.552 [2024-07-24 17:01:05.386550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.386599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.386643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.386990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.387056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.387103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.387155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.387200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.387507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.387528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.387544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.387561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.552 [2024-07-24 17:01:05.389671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.552 [2024-07-24 17:01:05.389729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.816 [previous message repeated ~272 times between 17:01:05.389729 and 17:01:05.595437]
00:44:08.816 [2024-07-24 17:01:05.596395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.816 [2024-07-24 17:01:05.597622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.816 [2024-07-24 17:01:05.599128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.599453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.600507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.600903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.601304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.601699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.602134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.602164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.602182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.602200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.604599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.817 [2024-07-24 17:01:05.605854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.607374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.608885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.609238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.609652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.610061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.610463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.610857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.611223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.611245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.611261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.611278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.614326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.817 [2024-07-24 17:01:05.615839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.617368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.618348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.618777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.619198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.619599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.619991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.621118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.621458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.621480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.621497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.621514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.624832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.817 [2024-07-24 17:01:05.626333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.627498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.627893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.628335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.628747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.629148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.630015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.631231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.631551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.631572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.631589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.631606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.634991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.817 [2024-07-24 17:01:05.636411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.636808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.637210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.637612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.638023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.638680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.639892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.641406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.641725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.641746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.641768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.641785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.645234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.817 [2024-07-24 17:01:05.645646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.646041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.646443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.646866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.647286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.648691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.817 [2024-07-24 17:01:05.650260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.651767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.652089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.652111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.652128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.652151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.654295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.818 [2024-07-24 17:01:05.654719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.655113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.655517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.655951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.657389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.658955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.660638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.661049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.661381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.661403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.661420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.661438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.663846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.818 [2024-07-24 17:01:05.664263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.664667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.665069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.665505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.665921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.666323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.666719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.667116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.667544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.667566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.667583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.667601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.670324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:08.818 [2024-07-24 17:01:05.670737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.671135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.671538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:08.818 [2024-07-24 17:01:05.671943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.672363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.672765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.673175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.673576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.674020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.674044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.674061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.674080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.676767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.080 [2024-07-24 17:01:05.677189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.677585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.677979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.678427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.678840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.679254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.679658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.680053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.680484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.680506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.680522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.680541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.683136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.080 [2024-07-24 17:01:05.683571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.683966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.684372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.684763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.685183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.685583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.685978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.686386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.686793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.686815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.686833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.686851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.689546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.080 [2024-07-24 17:01:05.689956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.690367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.690767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.691238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.691651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.692046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.692454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.692852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.693228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.693252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.693269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.693290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.695899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.080 [2024-07-24 17:01:05.696316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.696713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.697105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.697559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.697971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.698383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.698785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.699190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.699632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.699654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.699672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.699689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.080 [2024-07-24 17:01:05.702340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.080 [2024-07-24 17:01:05.702751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:09.083 [2024-07-24 17:01:05.779163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.779215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.779261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.779568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.779632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.779678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.779723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.779770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.780080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.780101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.780118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.780136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.781953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.084 [2024-07-24 17:01:05.782011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.782064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.782111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.782544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.782621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.782667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.782712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.782756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.783190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.783215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.783233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.783251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.785279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.084 [2024-07-24 17:01:05.785337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.785381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.785455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.785766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.785830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.785880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.785925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.785970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.786330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.786353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.786370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.786386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.788120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.084 [2024-07-24 17:01:05.788188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.788234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.788281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.788708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.788769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.788815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.788861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.788907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.789330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.789358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.789375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.789391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.791416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.084 [2024-07-24 17:01:05.791475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.791525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.791570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.791877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.791942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.791988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.792033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.792086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.792464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.792488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.792506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.792525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.794374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.084 [2024-07-24 17:01:05.794444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.794488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.794534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.794952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.795016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.795063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.795109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.795163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.084 [2024-07-24 17:01:05.795541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.795562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.795579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.795596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.797545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.085 [2024-07-24 17:01:05.797608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.797652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.797707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.798016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.798080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.798127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.798180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.798226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.798538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.798560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.798591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.798607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.800484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.085 [2024-07-24 17:01:05.800543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.800589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.800636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.801054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.801115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.801172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.801218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.801265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.801705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.801728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.801746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.801764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.803639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.085 [2024-07-24 17:01:05.803697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.803745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.803790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.804152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.804218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.804275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.804320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.804365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.804699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.804719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.804737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.804756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.806689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.085 [2024-07-24 17:01:05.806749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.806795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.806841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.807252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.807328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.807375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.807420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.807465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.807865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.807887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.807905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.807923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.809730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.085 [2024-07-24 17:01:05.809789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.809841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.809888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.810259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.810325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.810371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.810416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.810460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.810796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.810821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.810838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.810854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.812855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.085 [2024-07-24 17:01:05.812916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.813322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.813371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.813794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.813854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.813933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.813979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.814027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.814348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.814370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.814389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.814407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.816221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.085 [2024-07-24 17:01:05.816281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.816332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.817928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.818256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.818326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.818381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.818427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.085 [2024-07-24 17:01:05.818473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.086 [2024-07-24 17:01:05.818810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.086 [2024-07-24 17:01:05.818832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.086 [2024-07-24 17:01:05.818850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.086 [2024-07-24 17:01:05.818866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.086 [2024-07-24 17:01:05.822649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.086 [2024-07-24 17:01:05.824061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.351 (last message repeated for each subsequent allocation attempt through [2024-07-24 17:01:06.054879])
00:44:09.351 [2024-07-24 17:01:06.055298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.351 [2024-07-24 17:01:06.055694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.351 [2024-07-24 17:01:06.056095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.351 [2024-07-24 17:01:06.056557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.351 [2024-07-24 17:01:06.056972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.351 [2024-07-24 17:01:06.057383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.351 [2024-07-24 17:01:06.057782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.351 [2024-07-24 17:01:06.058186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.351 [2024-07-24 17:01:06.058588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.351 [2024-07-24 17:01:06.058610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.351 [2024-07-24 17:01:06.058627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.351 [2024-07-24 17:01:06.058643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.351 [2024-07-24 17:01:06.061287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.351 [2024-07-24 17:01:06.061697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.062094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.062499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.062923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.063346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.063743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.064136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.064538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.064961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.064983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.065000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.065017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.067673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.352 [2024-07-24 17:01:06.068089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.068503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.068906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.069367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.069793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.070199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.070595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.071003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.071433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.071457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.071476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.071493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.074532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.352 [2024-07-24 17:01:06.074946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.075367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.075765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.076241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.076652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.077049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.077459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.077859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.078286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.078310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.078328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.078345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.080948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.352 [2024-07-24 17:01:06.081365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.081762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.082166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.082600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.083016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.083446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.083844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.084248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.084686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.084710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.084728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.084745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.087476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.352 [2024-07-24 17:01:06.087884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.088290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.088694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.089075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.089503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.089902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.090309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.090705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.091080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.091102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.091121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.091137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.093809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.352 [2024-07-24 17:01:06.094232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.094636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.095038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.095478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.095892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.096299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.096698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.097107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.097499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.097523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.097541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.097558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.100240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.352 [2024-07-24 17:01:06.100653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.101051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.101459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.101916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.102342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.102745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.103154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.103569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.103996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.104019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.104037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.104055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.106738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.352 [2024-07-24 17:01:06.107155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.352 [2024-07-24 17:01:06.107554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.107952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.108357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.108775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.109181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.109575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.109970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.110395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.110420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.110439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.110457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.113081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.353 [2024-07-24 17:01:06.113500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.113555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.113953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.114340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.114755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.115160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.115555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.115947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.116396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.116420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.116437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.116454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.119030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.353 [2024-07-24 17:01:06.119449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.119870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.119935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.120394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.120808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.121212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.121608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.122014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.122411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.122434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.122452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.122469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.124846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.353 [2024-07-24 17:01:06.124909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.124955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.125001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.125401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.125472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.125521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.125566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.125614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.125994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.126017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.126036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.126052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.128456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.353 [2024-07-24 17:01:06.128530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.128577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.128650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.129162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.129241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.129311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.129369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.129417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.129821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.129842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.129859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.129876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.353 [2024-07-24 17:01:06.132190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.353 [2024-07-24 17:01:06.132250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.356 [last message repeated 272 more times between 17:01:06.132296 and 17:01:06.200477] 
00:44:09.356 [2024-07-24 17:01:06.200541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.356 [2024-07-24 17:01:06.200592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.356 [2024-07-24 17:01:06.200643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.356 [2024-07-24 17:01:06.200958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.356 [2024-07-24 17:01:06.201023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.356 [2024-07-24 17:01:06.201069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.356 [2024-07-24 17:01:06.201115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.356 [2024-07-24 17:01:06.201168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.356 [2024-07-24 17:01:06.201481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.356 [2024-07-24 17:01:06.201501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.356 [2024-07-24 17:01:06.201518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.356 [2024-07-24 17:01:06.201535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.356 [2024-07-24 17:01:06.203463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.356 [2024-07-24 17:01:06.203522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.356 [2024-07-24 17:01:06.203566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.357 [2024-07-24 17:01:06.203611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.357 [2024-07-24 17:01:06.203999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.357 [2024-07-24 17:01:06.204074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.357 [2024-07-24 17:01:06.204122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.357 [2024-07-24 17:01:06.204175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.357 [2024-07-24 17:01:06.204224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.357 [2024-07-24 17:01:06.204654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.357 [2024-07-24 17:01:06.204677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.357 [2024-07-24 17:01:06.204694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.357 [2024-07-24 17:01:06.204712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.357 [2024-07-24 17:01:06.206804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.357 [2024-07-24 17:01:06.206863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.618 [2024-07-24 17:01:06.206908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.618 [2024-07-24 17:01:06.206954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.618 [2024-07-24 17:01:06.207279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.618 [2024-07-24 17:01:06.207347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.618 [2024-07-24 17:01:06.207396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.618 [2024-07-24 17:01:06.207451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.207495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.207803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.207825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.207843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.207860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.209697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.619 [2024-07-24 17:01:06.209757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.209803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.209848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.210253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.210323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.210372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.210419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.210464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.210908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.210930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.210947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.210969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.213190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.619 [2024-07-24 17:01:06.213252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.213302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.213347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.213661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.213726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.213773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.213819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.213864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.214250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.214272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.214289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.214306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.216224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.619 [2024-07-24 17:01:06.216294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.216338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.216383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.216793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.216856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.216903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.216950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.216995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.217362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.217386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.217402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.217419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.219435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.619 [2024-07-24 17:01:06.219493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.219538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.219589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.219908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.219975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.220023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.220070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.220115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.220437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.220459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.220475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.220493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.222444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.619 [2024-07-24 17:01:06.222503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.222904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.222953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.223366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.223436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.223483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.223529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.223577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.223993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.224029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.224046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.224064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.225938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.619 [2024-07-24 17:01:06.225997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.226046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.227511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.227836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.227908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.227960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.228006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.228054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.228376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.228398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.228415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.228433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.231106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.619 [2024-07-24 17:01:06.232779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.234259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.235849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.236180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.619 [2024-07-24 17:01:06.236787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.238026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.239539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.241056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.241427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.241450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.241466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.241484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.245343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.620 [2024-07-24 17:01:06.246703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.248192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.249761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.250274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.251594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.253082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.254584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.255798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.256195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.256217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.256234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.256263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.259758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.620 [2024-07-24 17:01:06.261277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.262777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.263443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.263794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.265386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.266896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.268179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.268577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.269002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.269026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.269043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.269061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.272616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.620 [2024-07-24 17:01:06.274125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.274688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.275968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.276300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.277908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.279353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.279751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.280155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.280553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.280576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.280593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.280610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.620 [2024-07-24 17:01:06.284129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.620 [2024-07-24 17:01:06.284824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.620 [... message repeated 271 additional times between 17:01:06.284824 and 17:01:06.477049 ...] 
00:44:09.883 [2024-07-24 17:01:06.477049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.883 [2024-07-24 17:01:06.477475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.883 [2024-07-24 17:01:06.477873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.883 [2024-07-24 17:01:06.478279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.883 [2024-07-24 17:01:06.478695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.883 [2024-07-24 17:01:06.479866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.883 [2024-07-24 17:01:06.480490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.883 [2024-07-24 17:01:06.480893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.883 [2024-07-24 17:01:06.481981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.883 [2024-07-24 17:01:06.482358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.883 [2024-07-24 17:01:06.482380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.482397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.482414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.485079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.884 [2024-07-24 17:01:06.485500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.486747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.487310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.487732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.488155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.488558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.489338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.490364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.490790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.490815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.490833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.490851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.493346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.884 [2024-07-24 17:01:06.494219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.495158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.495556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.495950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.497342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.497741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.498143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.498550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.498919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.498940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.498961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.498977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.501455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.884 [2024-07-24 17:01:06.501866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.502280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.502684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.503019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.503445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.503845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.505380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.505781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.506219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.506243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.506261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.506278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.508848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.884 [2024-07-24 17:01:06.510527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.510934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.511338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.511737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.512159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.513419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.513962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.514366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.514720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.514741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.514758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.514775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.518316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.884 [2024-07-24 17:01:06.518762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.518820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.519225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.519546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.520193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.520592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.520985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.521393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.521813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.521832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.521849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.521866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.524426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.884 [2024-07-24 17:01:06.524836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.525245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.525303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.525633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.526444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.526841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.527719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.528638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.529050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.529073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.529091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.529109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.531176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.884 [2024-07-24 17:01:06.531241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.531286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.531330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.531646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.531715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.531761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.531810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.531855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.532277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.532301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.532319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.532338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.534525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.884 [2024-07-24 17:01:06.534584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.534641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.534687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.535117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.535200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.535249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.535294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.535339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.535756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.535779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.535796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.535818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.538147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.884 [2024-07-24 17:01:06.538207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.538253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.538297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.538732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.538801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.538849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.538895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.538941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.884 [2024-07-24 17:01:06.539359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.539383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.539400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.539420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.541553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.885 [2024-07-24 17:01:06.541610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.541654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.541700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.542014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.542079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.542125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.542177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.542228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.542607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.542629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.542645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.542661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.544608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.885 [2024-07-24 17:01:06.544667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.544711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.544756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.545181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.545252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.545299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.545346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.545391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.545781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.545802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.545819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.545836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.547790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.885 [2024-07-24 17:01:06.547849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.547893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.547942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.548332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.548398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.548451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.548499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.548544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.548872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.548894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.548910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.548928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.885 [2024-07-24 17:01:06.550963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.885 [2024-07-24 17:01:06.551023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.888 [... same message repeated through 2024-07-24 17:01:06.642417 ...] 
00:44:09.888 [2024-07-24 17:01:06.642476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.642975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.643026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.643072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.643512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.643535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.643553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.647331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.647389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.647434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.647478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.652042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.652100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.888 [2024-07-24 17:01:06.652153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.652633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.652684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.652729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.656672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.656745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.658238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.658296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.658707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.658762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.658807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.658850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.661402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.888 [2024-07-24 17:01:06.661460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.661504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.661566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.663201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.663514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.663581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.663628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.663673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.888 [2024-07-24 17:01:06.663717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.666316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.666723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.667116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.667652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.889 [2024-07-24 17:01:06.667969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.668051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.669641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.669703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.671184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.671246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.672355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.672429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.673962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.674306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.674328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.674345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.674363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.889 [2024-07-24 17:01:06.676785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.677204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.678159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.679367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.679684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.681228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.682310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.683800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.685146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.685465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.685486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.685503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.685520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.889 [2024-07-24 17:01:06.688097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.689037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.690257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.691773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.692093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.693194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.694687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.696025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.697527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.697850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.697871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.697888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.697905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.889 [2024-07-24 17:01:06.701157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.702382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.703892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.705402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.705813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.707375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.708778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.710301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.711934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.712364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.712388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.712405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.712422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.889 [2024-07-24 17:01:06.715862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.717374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.718872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.719611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.719930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.721320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.722817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.724433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.724834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.725265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.725288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.725307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.725324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.889 [2024-07-24 17:01:06.729002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.730505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.731265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.732818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.733137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.734621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.736200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.736604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.737040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.737461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.737488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.737505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.737522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:09.889 [2024-07-24 17:01:06.741103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:09.889 [2024-07-24 17:01:06.741704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.743112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.744664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.744984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.746596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.746996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.747399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.747793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.748221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.748244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.748262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.748280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.151 [2024-07-24 17:01:06.750788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.752197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.753714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.755219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.755540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.755953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.756357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.756751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.757157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.757577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.757599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.757616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.757634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.151 [2024-07-24 17:01:06.760965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.762491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.763993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.765258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.765667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.766079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.766484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.766878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.767611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.767973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.767994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.768012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.768030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.151 [2024-07-24 17:01:06.771615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.773126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.773534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.773931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.774362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.774782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.775187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.776795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.778289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.778610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.778631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.778648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.151 [2024-07-24 17:01:06.778665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.151 [2024-07-24 17:01:06.782099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:10.155 [... identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468 repeated through 2024-07-24 17:01:06.945903; duplicates elided ...]
00:44:10.155 [2024-07-24 17:01:06.948533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.950117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.951763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.951824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.952148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.953717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.953771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.954967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.955022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.955369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.955392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.955409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.955426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.155 [2024-07-24 17:01:06.957465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.957525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.957570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.957616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.958009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.958428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.958481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.959812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.959864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.960229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.960252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.960268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.960286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.155 [2024-07-24 17:01:06.962134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.962200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.962244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.962288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.962603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.962667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.962727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.962775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.962820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.963135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.963165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.963182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.963199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.155 [2024-07-24 17:01:06.965529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.965588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.965638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.965683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.966016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.966093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.966150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.966196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.966241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.966553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.966574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.966591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.966607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.155 [2024-07-24 17:01:06.968478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.968536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.968594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.968639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.968953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.969018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.969065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.969110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.969170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.969526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.969546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.969564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.969581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.155 [2024-07-24 17:01:06.971843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.971901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.971946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.971991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.972329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.972395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.972442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.972494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.972542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.972853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.972874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.972891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.972908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.155 [2024-07-24 17:01:06.974767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.974847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.155 [2024-07-24 17:01:06.974896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.974941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.975270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.975335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.975383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.975427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.975478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.975900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.975925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.975943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.975960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.156 [2024-07-24 17:01:06.978167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.978229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.978274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.978319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.978631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.978695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.978741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.978786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.978831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.979153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.979175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.979192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.979209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.156 [2024-07-24 17:01:06.981076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.981135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.981192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.981236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.981666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.981730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.981777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.981822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.981866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.982290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.982313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.982331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.982349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.156 [2024-07-24 17:01:06.984467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.984536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.984581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.984639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.984957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.985039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.985090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.985135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.985188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.985523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.985546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.985562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.985582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.156 [2024-07-24 17:01:06.987347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.987405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.987451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.987495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.987915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.987976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.988024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.988069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.988116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.988547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.988570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.988586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.988605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.156 [2024-07-24 17:01:06.990634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.990692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.990735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.990780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.156 [2024-07-24 17:01:06.991091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.991163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.991211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.991255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.991306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.991666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.991687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.991703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.991721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.157 [2024-07-24 17:01:06.993592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.993650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.993700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.993745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.994162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.994227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.994274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.994320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.994365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.994770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.994791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.994807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.157 [2024-07-24 17:01:06.994824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.157 [2024-07-24 17:01:06.996802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.422 [message above repeated ~270 times between 17:01:06.996802 and 17:01:07.080360; duplicate lines omitted] 
00:44:10.422 [2024-07-24 17:01:07.081733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.083254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.083576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.083598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.083615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.086206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.087387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.088606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.090135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.090462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.090484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.091210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.092724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.422 [2024-07-24 17:01:07.094400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.095934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.096265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.096287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.096303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.099586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.100827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.102338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.103844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.104228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.104249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.105896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.107535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.422 [2024-07-24 17:01:07.109151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.110637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.111057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.111078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.111095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.114472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.115991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.117502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.118259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.118586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.118607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.120037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.121594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.422 [2024-07-24 17:01:07.123279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.123678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.124107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.124129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.124156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.127755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.129259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.130160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.131850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.132181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.132204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.133715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.135272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.422 [2024-07-24 17:01:07.135675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.136071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.136505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.136532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.136549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.140086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.141192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.422 [2024-07-24 17:01:07.142638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.143938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.144269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.144291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.145814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.146253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.423 [2024-07-24 17:01:07.146649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.147043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.147494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.147517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.147536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.149981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.151511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.153030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.153876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.154338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.154361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.154773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.155177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.423 [2024-07-24 17:01:07.155574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.156951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.157299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.157320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.157337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.160682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.162202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.163172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.163576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.164018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.164040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.164458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.164854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.423 [2024-07-24 17:01:07.166022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.167249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.167573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.167595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.167611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.171071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.172508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.172907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.173305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.173742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.173764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.174178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.174577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.423 [2024-07-24 17:01:07.174981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.175406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.175847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.175870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.175888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.178550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.178962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.179364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.179758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.180198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.180224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.180635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.181040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.423 [2024-07-24 17:01:07.181457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.181853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.182312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.182334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.182351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.184952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.185373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.185783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.186188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.186557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.186580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.186989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.187395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.423 [2024-07-24 17:01:07.187789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.188482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.188863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.188887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.188904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.191593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.192004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.192415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.192822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.193265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.193289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.193697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.194093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.423 [2024-07-24 17:01:07.194499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.194899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.423 [2024-07-24 17:01:07.195304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.195326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.195348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.197944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.198364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.198760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.199161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.199594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.199616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.200032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.200445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.424 [2024-07-24 17:01:07.200847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.201249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.201685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.201707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.201725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.204431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.204835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.205239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.205638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.206039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.206061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.206477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.424 [2024-07-24 17:01:07.206885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.424 [2024-07-24 17:01:07.207288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:10.689 [2024-07-24 17:01:07.367267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.367312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.367737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.367759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.367776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.369778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.369838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.369889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.369933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.370255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.370281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.370342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.370388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.689 [2024-07-24 17:01:07.370433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.370477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.370787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.370808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.370824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.372662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.372720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.372765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.372810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.373256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.373278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.373341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.373387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.689 [2024-07-24 17:01:07.373433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.373477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.373895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.373919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.373937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.376063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.376121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.376175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.376220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.376532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.376554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.376615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.376661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.689 [2024-07-24 17:01:07.376716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.376762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.377075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.689 [2024-07-24 17:01:07.377097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.377114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.378901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.378967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.379018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.379064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.379409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.379431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.379492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.379538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.690 [2024-07-24 17:01:07.379583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.379635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.380079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.380101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.380119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.382288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.382346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.382391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.382436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.382767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.382789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.382854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.382900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.690 [2024-07-24 17:01:07.382944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.383002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.383322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.383345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.383362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.385223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.385282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.385332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.385383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.385697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.385717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.385780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.385828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.690 [2024-07-24 17:01:07.385873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.385920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.386320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.386342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.386359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.388534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.388598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.388643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.388692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.389004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.389026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.389087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.389133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.690 [2024-07-24 17:01:07.389186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.389230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.389542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.389562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.389579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.391395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.391453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.391502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.391547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.391908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.391929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.391992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.392042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.690 [2024-07-24 17:01:07.392086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.392131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.392457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.392478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.392494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.394713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.394771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.394816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.394862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.395185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.395208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.395273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.395327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.690 [2024-07-24 17:01:07.395382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.395434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.395744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.395764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.395781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.397551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.397611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.397664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.397712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.398023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.398044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.398104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.398159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.690 [2024-07-24 17:01:07.398204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.398249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.398676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.398700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.398717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.401458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.401529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.401575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.401620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.401955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.401976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.402037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.690 [2024-07-24 17:01:07.402082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.691 [2024-07-24 17:01:07.402127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.402179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.402487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.402507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.402524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.404352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.404409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.404458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.404503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.404815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.404836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.404896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.404943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.691 [2024-07-24 17:01:07.404997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.405042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.405390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.405413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.405429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.407578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.407640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.407689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.407735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.408172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.408195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.408260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.691 [2024-07-24 17:01:07.408309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.691 [2024-07-24 17:01:07.408353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.694 [last message repeated through 2024-07-24 17:01:07.532939] 
00:44:10.694 [2024-07-24 17:01:07.533830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.694 [2024-07-24 17:01:07.534237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.694 [2024-07-24 17:01:07.534623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.694 [2024-07-24 17:01:07.534645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.694 [2024-07-24 17:01:07.534662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.694 [2024-07-24 17:01:07.540027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.694 [2024-07-24 17:01:07.541518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.694 [2024-07-24 17:01:07.541915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.694 [2024-07-24 17:01:07.542331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.694 [2024-07-24 17:01:07.542708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.694 [2024-07-24 17:01:07.542730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.694 [2024-07-24 17:01:07.543137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.694 [2024-07-24 17:01:07.543856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.694 [2024-07-24 17:01:07.545081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.546612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.546930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.546952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.546970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.550325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.550744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.552417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.552821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.553272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.553296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.554717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.555122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.955 [2024-07-24 17:01:07.555521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.555934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.556335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.556358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.556375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.561045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.561463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.561859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.563486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.563949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.563972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.564384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.564790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.955 [2024-07-24 17:01:07.565203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.565603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.566045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.566067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.566086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.569649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.570061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.570469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.571885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.572351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.572374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.572785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.573196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.955 [2024-07-24 17:01:07.573597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.573996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.574449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.574472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.574490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.577784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.578860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.579598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.579994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.580402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.580425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.580838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.581248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.955 [2024-07-24 17:01:07.581642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.582035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.582479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.582502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.582520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.584994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.955 [2024-07-24 17:01:07.585825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.586805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.587212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.587625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.587648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.588059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.588480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.956 [2024-07-24 17:01:07.588876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.589282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.589722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.589745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.589765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.595014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.595429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.595830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.596251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.596631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.596658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.597065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.597469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.956 [2024-07-24 17:01:07.597863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.598265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.598670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.598693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.598711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.602548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.602960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.603369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.603771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.604145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.604168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.604585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.604979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.956 [2024-07-24 17:01:07.605381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.605775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.606135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.606165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.606183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.609332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.609751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.610162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.610564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.610995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.611020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.611438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.611834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.956 [2024-07-24 17:01:07.612240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.612640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.613010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.613032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.613050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.615513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.615929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.616341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.616739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.617185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.617209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.617615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.618009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.956 [2024-07-24 17:01:07.618415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.618826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.619337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.619359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.619377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.624157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.624580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.624976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.625378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.625813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.625836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.626253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.626654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.956 [2024-07-24 17:01:07.627053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.628678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.629115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.629147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.629167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.631669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.632099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.632505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.632899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.633389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.633414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.633821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.634239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.956 [2024-07-24 17:01:07.634639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.636088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.636542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.636564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.636582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.956 [2024-07-24 17:01:07.639825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.957 [2024-07-24 17:01:07.640241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.957 [2024-07-24 17:01:07.640637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.957 [2024-07-24 17:01:07.641030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.957 [2024-07-24 17:01:07.641415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.957 [2024-07-24 17:01:07.641437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.957 [2024-07-24 17:01:07.641848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:10.957 [2024-07-24 17:01:07.642965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:10.957 [2024-07-24 17:01:07.643656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... same error repeated for every entry between these two timestamps ...]
00:44:11.227 [2024-07-24 17:01:07.830619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:11.227 [2024-07-24 17:01:07.830665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.830710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.831121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.831151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.831168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.835652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.835711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.835757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.835802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.836242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.836265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.836330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.836376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.227 [2024-07-24 17:01:07.836425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.836470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.836798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.836819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.836836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.838654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.838715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.838761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.838806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.839247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.839271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.839329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.839375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.227 [2024-07-24 17:01:07.839421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.839469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.839784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.839805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.839823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.843505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.843571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.843618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.843664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.843980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.844001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.844058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.844104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.227 [2024-07-24 17:01:07.844164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.844211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.844522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.844542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.844563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.846453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.846512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.846562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.846609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.847029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.847054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.847124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.847183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.227 [2024-07-24 17:01:07.847228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.847273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.847583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.847604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.847621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.851424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.851489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.851534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.851579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.851934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.851954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.852020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.852066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.227 [2024-07-24 17:01:07.852111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.852165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.852475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.852496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.852513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.854539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.854599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.854644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.854692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.855042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.855064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.855123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.855177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.227 [2024-07-24 17:01:07.855235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.855280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.855714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.855737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.855757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.859647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.859705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.859750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.859804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.860113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.860134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.860200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.860247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.227 [2024-07-24 17:01:07.860302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.860346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.860652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.860673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.860690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.862822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.862882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.862936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.862985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.863307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.863330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.863389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.863436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.227 [2024-07-24 17:01:07.863482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.863530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.863940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.863963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.863980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.868474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.868535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.868580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.868625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.868937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.868959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.869024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.869070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.227 [2024-07-24 17:01:07.869114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.869168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.869478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.869498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.869515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.871642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.871701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.871745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.227 [2024-07-24 17:01:07.871790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.872244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.872266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.872326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.872373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.228 [2024-07-24 17:01:07.872419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.872465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.872890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.872913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.872930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.877278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.877336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.877383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.877438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.877745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.877766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.877827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.877873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.228 [2024-07-24 17:01:07.877917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.877962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.878312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.878335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.878352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.880400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.880459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.880509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.880553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.880962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.880985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.881039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.881087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.228 [2024-07-24 17:01:07.881132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.881187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.881514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.881535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.881552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.885186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.885246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.885291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.885336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.885646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.885671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.885735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.228 [2024-07-24 17:01:07.885782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.228 [2024-07-24 17:01:07.885832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.229 [... previous message repeated continuously, with identical source location, for every entry from 2024-07-24 17:01:07.885832 through 2024-07-24 17:01:08.053486 ...]
00:44:11.229 [2024-07-24 17:01:08.053927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.229 [2024-07-24 17:01:08.055295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.229 [2024-07-24 17:01:08.055729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.055752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.055770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.061091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.061507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.061905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.062318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.062772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.062795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.064501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.064902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.230 [2024-07-24 17:01:08.065303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.066997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.067462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.067485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.067502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.073439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.073860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.074283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.074685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.075104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.075128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.076575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.076977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.230 [2024-07-24 17:01:08.077385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.078815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.079289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.079316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.079333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.084821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.230 [2024-07-24 17:01:08.085244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.489 [2024-07-24 17:01:08.085642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.489 [2024-07-24 17:01:08.086040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.489 [2024-07-24 17:01:08.086435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.489 [2024-07-24 17:01:08.086459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.489 [2024-07-24 17:01:08.087623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.489 [2024-07-24 17:01:08.088275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.489 [2024-07-24 17:01:08.088670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.489 [2024-07-24 17:01:08.089765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.489 [2024-07-24 17:01:08.090166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.489 [2024-07-24 17:01:08.090189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.090206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.094947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.095650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.096047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.096455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.096836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.096858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.097603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.098659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.490 [2024-07-24 17:01:08.099055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.099676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.099997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.100019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.100036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.103609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.104972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.105379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.105782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.106193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.106217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.106630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.108295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.490 [2024-07-24 17:01:08.108688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.109081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.109413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.109435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.109452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.112742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.114214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.114615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.115010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.115395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.115418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.115826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.116879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.490 [2024-07-24 17:01:08.117622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.118020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.118389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.118411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.118427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.122124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.122793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.123937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.124341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.124749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.124771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.125192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.125596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.490 [2024-07-24 17:01:08.127132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.127533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.127963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.127986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.128003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.132580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.132997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.134492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.134896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.135341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.135364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.135769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.136179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.490 [2024-07-24 17:01:08.137241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.137978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.138417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.138441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.138459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.142930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.143341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.144033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.145144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.145585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.145608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.146017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.146430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.490 [2024-07-24 17:01:08.146834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.148364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.148801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.148823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.148845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.153322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.153743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.154137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.155708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.156130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.156159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.156565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.157426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.490 [2024-07-24 17:01:08.158355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.159391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.159841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.490 [2024-07-24 17:01:08.159864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.159881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.165997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.166418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.167795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.168226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.168657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.168680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.169761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.170480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.491 [2024-07-24 17:01:08.170875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.171278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.171684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.171706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.171723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.176162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.176574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.177418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.178386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.178808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.178831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.179475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.180639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.491 [2024-07-24 17:01:08.181032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.181436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.181864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.181885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.181902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.185928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.187153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.188583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.190086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.190515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.190538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.192091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.491 [2024-07-24 17:01:08.193779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.491 [2024-07-24 17:01:08.195337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:11.493 [... same accel_dpdk_cryptodev_task_alloc_resources src_mbufs allocation error repeated continuously from 17:01:08.195 through 17:01:08.382; duplicate messages omitted ...]
00:44:11.755 [2024-07-24 17:01:08.383037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.755 [2024-07-24 17:01:08.383089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.755 [2024-07-24 17:01:08.383411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.755 [2024-07-24 17:01:08.383432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.755 [2024-07-24 17:01:08.383449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.755 [2024-07-24 17:01:08.387604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.755 [2024-07-24 17:01:08.387663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.755 [2024-07-24 17:01:08.387708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.755 [2024-07-24 17:01:08.387752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.755 [2024-07-24 17:01:08.388075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.755 [2024-07-24 17:01:08.388096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.755 [2024-07-24 17:01:08.388168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.755 [2024-07-24 17:01:08.388216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.755 [2024-07-24 17:01:08.388275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.755 [2024-07-24 17:01:08.388320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.388634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.388655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.388671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.393246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.393305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.393349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.393395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.393815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.393838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.393895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.393943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.756 [2024-07-24 17:01:08.393990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.394036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.394467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.394490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.394508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.398427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.398486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.398531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.398582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.398897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.398918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.398977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.399031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.756 [2024-07-24 17:01:08.399079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.399125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.399443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.399464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.399481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.403152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.403218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.403275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.403326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.403641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.403663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.403724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.403770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.756 [2024-07-24 17:01:08.403815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.403860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.404175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.404197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.404214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.408355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.408415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.408462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.408507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.408938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.408960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.409022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.409069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.756 [2024-07-24 17:01:08.409116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.409168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.409572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.409594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.409612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.413720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.413790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.413834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.413879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.414198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.414221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.414283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.414334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.756 [2024-07-24 17:01:08.414378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.414423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.414734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.414755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.414771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.417757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.417815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.417860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.417904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.418225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.418247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.418308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.418366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.756 [2024-07-24 17:01:08.418410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.418472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.418782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.418803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.418819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.423546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.423611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.423661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.423706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.424119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.424150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.424207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.424253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.756 [2024-07-24 17:01:08.424297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.424343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.756 [2024-07-24 17:01:08.424771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.424797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.424814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.429828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.429886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.429930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.429975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.430297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.430319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.430387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.430442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.757 [2024-07-24 17:01:08.430487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.430531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.430860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.430881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.430897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.433794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.433861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.433909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.433954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.434274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.434298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.434359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.434404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.757 [2024-07-24 17:01:08.434447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.434491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.434852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.434873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.434890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.439450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.439509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.439555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.439607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.440062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.440084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.440148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.440197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.757 [2024-07-24 17:01:08.440244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.440300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.440701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.440722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.440739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.444435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.444494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.444542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.444587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.444899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.444920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.444981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.445026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.757 [2024-07-24 17:01:08.445071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.445115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.445548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.445570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.445588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.448831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.448889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.448934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.448979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.449300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.449321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.449387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.757 [2024-07-24 17:01:08.449445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.757 [2024-07-24 17:01:08.449492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
[identical "Failed to get src_mbufs!" error repeated continuously from 17:01:08.449 through 17:01:08.593 — duplicate log lines elided]
00:44:11.760 [2024-07-24 17:01:08.595120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.596731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.597049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.597071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.597088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.602246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.602653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.603048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.603454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.603775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.603799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.605373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.607053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:11.760 [2024-07-24 17:01:08.608591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.609621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.609962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.760 [2024-07-24 17:01:08.609984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:11.761 [2024-07-24 17:01:08.610001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.612416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.612828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.613258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.614628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.614947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.614968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.616631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.618116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.023 [2024-07-24 17:01:08.619183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.620403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.620719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.620742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.620760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.623301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.623725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.625145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.626703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.627025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.627047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.628585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.629659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.023 [2024-07-24 17:01:08.630892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.632409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.632730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.632751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.632768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.635430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.636895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.638501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.640005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.640336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.640358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.641351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.642584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.023 [2024-07-24 17:01:08.644101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.645619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.646057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.646079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.646096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.650255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.651921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.653433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.654825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.655178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.655205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.656443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.657953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.023 [2024-07-24 17:01:08.659466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.660178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.660621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.660644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.660661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.664415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.666026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.667480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.023 [2024-07-24 17:01:08.668602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.668945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.668967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.670479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.671977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.024 [2024-07-24 17:01:08.672783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.673202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.673636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.673659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.673677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.677429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.678970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.680016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.681257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.681574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.681596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.683136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.684001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.024 [2024-07-24 17:01:08.684408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.684802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.685241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.685264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.685280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.688798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.689690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.690914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.692426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.692746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.692767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.693833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.694237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.024 [2024-07-24 17:01:08.694631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.695023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.695478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.695501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.695519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.697878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.699128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.700646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.702158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.702534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.702556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.702965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.703369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.024 [2024-07-24 17:01:08.703762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.704162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.704528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.704549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.704566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.707724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.709237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.710743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.711910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.712336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.712359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.712770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.713174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.024 [2024-07-24 17:01:08.713569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.714566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.714924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.714945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.714963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.718328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.719846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.721066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.721467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.721897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.721921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.722337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.722736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.024 [2024-07-24 17:01:08.723642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.724868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.725195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.725217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.725234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.728583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.729899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.730304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.730700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.731117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.731147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.731561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.732475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.024 [2024-07-24 17:01:08.733703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.735182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.735502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.735523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.735554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.738741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.024 [2024-07-24 17:01:08.739156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.025 [2024-07-24 17:01:08.739550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.025 [2024-07-24 17:01:08.739945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.025 [2024-07-24 17:01:08.740374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.025 [2024-07-24 17:01:08.740398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.025 [2024-07-24 17:01:08.741191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.025 [2024-07-24 17:01:08.742410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.025 [2024-07-24 17:01:08.743892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:12.028 [2024-07-24 17:01:08.847953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:12.028 [2024-07-24 17:01:08.847997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.848042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.848411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.848433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.848449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.850308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.850370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.850415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.850460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.850770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.850791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.850857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.850903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.028 [2024-07-24 17:01:08.850948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.850993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.851309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.851337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.851354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.853684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.853743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.853788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.853834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.854161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.854184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.854242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.854287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.028 [2024-07-24 17:01:08.854338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.854383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.854692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.854712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.854728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.856554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.856623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.856685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.856732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.857046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.857066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.857129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.857182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.028 [2024-07-24 17:01:08.857227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.857271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.857673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.857695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.857711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.860352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.860413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.860462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.860507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.860850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.860871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.860931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.860977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.028 [2024-07-24 17:01:08.861023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.861068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.861386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.861408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.861424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.863294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.863353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.028 [2024-07-24 17:01:08.863397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.863441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.863752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.863773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.863838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.863884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.029 [2024-07-24 17:01:08.863928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.863979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.864553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.864576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.864593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.866926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.866996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.867041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.867086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.867480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.867502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.867566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.867613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.029 [2024-07-24 17:01:08.867670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.867719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.868137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.868169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.868187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.870481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.870561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.870619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.870664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.871052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.871073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.871127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.871180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.029 [2024-07-24 17:01:08.871226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.871271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.871692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.871714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.871731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.874062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.874122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.874179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.874224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.874636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.874658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.874726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.874772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.029 [2024-07-24 17:01:08.874817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.874862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.875249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.875271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.875288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.877539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.877602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.877648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.877693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.878132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.878162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.878238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.878284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.029 [2024-07-24 17:01:08.878329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.878372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.878801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.878824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.029 [2024-07-24 17:01:08.878841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.881080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.881146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.881192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.881238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.881656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.881677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.881733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.881780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.290 [2024-07-24 17:01:08.881825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.881871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.882307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.882330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.882347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.884695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.884754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.884799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.884843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.885273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.885300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.885359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.885406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.290 [2024-07-24 17:01:08.885452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.885497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.885883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.885904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.885921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.888233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.888293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.888344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.888390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.888794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.888815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.290 [2024-07-24 17:01:08.888887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.291 [2024-07-24 17:01:08.888946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.291 [2024-07-24 17:01:08.889002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.291 [2024-07-24 17:01:08.889048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.291 [2024-07-24 17:01:08.889432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.291 [2024-07-24 17:01:08.889454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.291 [2024-07-24 17:01:08.889470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.291 [2024-07-24 17:01:08.891766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.291 [2024-07-24 17:01:08.891825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.291 [2024-07-24 17:01:08.891871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.291 [2024-07-24 17:01:08.891930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.291 [2024-07-24 17:01:08.892306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.291 [2024-07-24 17:01:08.892328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.291 [2024-07-24 17:01:08.892401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.291 [2024-07-24 17:01:08.892459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.291 [2024-07-24 17:01:08.892530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... the same *ERROR* line repeated continuously for every log entry between 17:01:08.892530 and 17:01:09.058032 ...]
00:44:12.294 [2024-07-24 17:01:09.058032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:12.294 [2024-07-24 17:01:09.059545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.061044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.061487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.061510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.061530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.065390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.067051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.068576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.069982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.070341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.070363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.071611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.073114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.294 [2024-07-24 17:01:09.074624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.075421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.075847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.075870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.075887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.079560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.081195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.082823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.083751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.084110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.084132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.085652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.087151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.294 [2024-07-24 17:01:09.088132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.088545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.088981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.089005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.089023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.092545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.094064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.094632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.095866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.096195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.096218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.097749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.099085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.294 [2024-07-24 17:01:09.099489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.099883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.100303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.100326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.100342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.103785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.104391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.105802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.107349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.107669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.107691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.109250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.109650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.294 [2024-07-24 17:01:09.110046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.110451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.110887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.110910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.110928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.113885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.115392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.116742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.294 [2024-07-24 17:01:09.118257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.118580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.118602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.119031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.119435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.295 [2024-07-24 17:01:09.119838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.120241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.120675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.120697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.120715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.123728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.124982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.126477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.127990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.128375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.128399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.128811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.129215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.295 [2024-07-24 17:01:09.129615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.130015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.130342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.130365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.130382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.133421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.134893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.136423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.137116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.137546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.137570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.137979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.138382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.295 [2024-07-24 17:01:09.138777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.140167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.140506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.140528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.140549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.143888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.145417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.146297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.146698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.147135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.147166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.147575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.295 [2024-07-24 17:01:09.147974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.558 [2024-07-24 17:01:09.148966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.150201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.150525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.150546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.150563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.153901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.155128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.155534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.155930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.156344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.156368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.156774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.157710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.558 [2024-07-24 17:01:09.158929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.160431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.160751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.160772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.160789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.164042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.164467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.164866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.165274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.165705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.165729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.166303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.167486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.558 [2024-07-24 17:01:09.168986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.170513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.170836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.170859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.170875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.172994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.173414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.173810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.174212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.174642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.174663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.176191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.177836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.558 [2024-07-24 17:01:09.179356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.180755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.181102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.181124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.181161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.183367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.183773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.184185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.184580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.184904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.184925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.558 [2024-07-24 17:01:09.186218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.187724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.559 [2024-07-24 17:01:09.188765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.189991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.190373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.190397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.190415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.192494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.192904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.193307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.193999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.194393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.194416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.195930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.197427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.559 [2024-07-24 17:01:09.198683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.199966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.200322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.200358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.200374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.202784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.203206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.203603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.203656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.203976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.203997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.205694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.559 [2024-07-24 17:01:09.205748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.562 [2024-07-24 17:01:09.272656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.272700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.273064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.273086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.273104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.275396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.275455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.275501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.275561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.275986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.276008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.276076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.276134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.562 [2024-07-24 17:01:09.276189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.276254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.276655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.276676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.276692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.279010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.279085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.279155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.279202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.279581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.279603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.279668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.279715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.562 [2024-07-24 17:01:09.279759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.279805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.280233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.280255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.280272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.282535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.282593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.282650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.282697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.283083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.283105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.283180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.283228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.562 [2024-07-24 17:01:09.283272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.283316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.283736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.283758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.283775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.286017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.286076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.286127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.286181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.286610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.286632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.286687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.286738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.562 [2024-07-24 17:01:09.286784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.286830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.287271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.287294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.287311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.289653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.289713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.289757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.289802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.290249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.290275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.290335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.290381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.562 [2024-07-24 17:01:09.290427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.290472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.290846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.290867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.562 [2024-07-24 17:01:09.290883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.293151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.293210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.293260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.293307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.293724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.293747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.293801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.293849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.563 [2024-07-24 17:01:09.293894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.293939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.294371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.294399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.294416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.296728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.296787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.296832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.296877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.297284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.297306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.297371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.297418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.563 [2024-07-24 17:01:09.297463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.297508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.297929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.297951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.297968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.300219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.300291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.300343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.300389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.300815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.300838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.300892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.300939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.563 [2024-07-24 17:01:09.300985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.301030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.301421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.301444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.301461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.303662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.303720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.303765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.303815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.304238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.304273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.304331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.304378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.563 [2024-07-24 17:01:09.304435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.304492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.304872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.304892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.304909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.307222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.307282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.307327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.307373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.307758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.307779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.307844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.307890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.563 [2024-07-24 17:01:09.307935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.307981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.308376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.308399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.308416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.311017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.311083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.311129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.311183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.311591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.311613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.311677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.311732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.563 [2024-07-24 17:01:09.311781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.311826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.312254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.312277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.312294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.314615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.314687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.314733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.563 [2024-07-24 17:01:09.315135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.315560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.315582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.315642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.316038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.564 [2024-07-24 17:01:09.316093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.316495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.316895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.316917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.316934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.319671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.320080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.320490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.320892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.321329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.321353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.321414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:12.564 [2024-07-24 17:01:09.321812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:12.564 [2024-07-24 17:01:09.321866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:13.410
00:44:13.411 Latency(us)
00:44:13.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:13.411 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:44:13.411 Verification LBA range: start 0x0 length 0x100
00:44:13.411 crypto_ram : 6.03 42.47 2.65 0.00 0.00 2935970.20 275146.34 2550136.83
00:44:13.411 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:44:13.411 Verification LBA range: start 0x100 length 0x100
00:44:13.411 crypto_ram : 5.93 43.19 2.70 0.00 0.00 2879982.80 310378.50 2362232.01
00:44:13.411 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:44:13.411 Verification LBA range: start 0x0 length 0x100
00:44:13.411 crypto_ram1 : 6.03 42.46 2.65 0.00 0.00 2840615.32 273468.62 2362232.01
00:44:13.411 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:44:13.411 Verification LBA range: start 0x100 length 0x100
00:44:13.411 crypto_ram1 : 5.93 43.18 2.70 0.00 0.00 2785945.19 310378.50 2174327.19
00:44:13.411 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:44:13.411 Verification LBA range: start 0x0 length 0x100
00:44:13.411 crypto_ram2 : 5.57 264.93 16.56 0.00 0.00 434016.26 49492.79 640889.65
00:44:13.411 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:44:13.411 Verification LBA range: start 0x100 length 0x100
00:44:13.411 crypto_ram2 : 5.57 285.35 17.83 0.00 0.00 403499.31 36700.16 617401.55
00:44:13.411 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:44:13.411 Verification LBA range: start 0x0 length 0x100
00:44:13.411 crypto_ram3 : 5.66 273.00 17.06 0.00 0.00 409704.92 5347.74 375809.64
00:44:13.411 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:44:13.411 Verification LBA range: start 0x100 length 0x100
00:44:13.411 crypto_ram3 : 5.65 294.53 18.41 0.00 0.00 381137.73 12373.20 473117.49
00:44:13.411 ===================================================================================================================
00:44:13.411 Total : 1289.12 80.57 0.00 0.00 750656.30 5347.74 2550136.83
00:44:16.697
00:44:16.697 real 0m13.158s
00:44:16.697 user 0m24.332s
00:44:16.697 sys 0m0.663s
00:44:16.697 17:01:12 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:44:16.697 17:01:12 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:44:16.697 ************************************
00:44:16.697 END TEST bdev_verify_big_io
00:44:16.697 ************************************
00:44:16.697 17:01:13 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:44:16.697 17:01:13 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:44:16.697 17:01:13 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:44:16.697 17:01:13 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:44:16.697 ************************************
00:44:16.697 START TEST bdev_write_zeroes
00:44:16.697 ************************************
00:44:16.697 17:01:13 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:44:16.697 [2024-07-24 17:01:13.131452] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization...
00:44:16.697 [2024-07-24 17:01:13.131533] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919732 ]
00:44:16.697 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.697 EAL: Requested device 0000:3d:01.0 cannot be used
00:44:16.697 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.697 EAL: Requested device 0000:3d:01.1 cannot be used
00:44:16.697 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.697 EAL: Requested device 0000:3d:01.2 cannot be used
00:44:16.697 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.697 EAL: Requested device 0000:3d:01.3 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3d:01.4 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3d:01.5 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3d:01.6 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3d:01.7 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3d:02.0 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3d:02.1 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3d:02.2 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3d:02.3 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3d:02.4 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3d:02.5 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3d:02.6 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3d:02.7 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:01.0 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:01.1 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:01.2 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:01.3 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:01.4 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:01.5 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:01.6 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:01.7 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:02.0 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:02.1 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:02.2 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:02.3 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:02.4 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:02.5 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:02.6 cannot be used
00:44:16.698 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:16.698 EAL: Requested device 0000:3f:02.7 cannot be used
00:44:16.698 [2024-07-24 17:01:13.325640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:44:16.955 [2024-07-24 17:01:13.597563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:44:16.955 [2024-07-24 17:01:13.619360] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:44:16.955 [2024-07-24 17:01:13.627379] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:44:16.955 [2024-07-24 17:01:13.635388] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:44:17.213 [2024-07-24 17:01:14.024133] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:44:20.502 [2024-07-24 17:01:16.832994] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:44:20.502 [2024-07-24 17:01:16.833092] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:44:20.502 [2024-07-24 17:01:16.833116] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base
bdev arrival 00:44:20.502 [2024-07-24 17:01:16.841008] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:44:20.502 [2024-07-24 17:01:16.841052] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:44:20.502 [2024-07-24 17:01:16.841068] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:20.502 [2024-07-24 17:01:16.849045] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:44:20.502 [2024-07-24 17:01:16.849079] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:44:20.502 [2024-07-24 17:01:16.849095] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:20.502 [2024-07-24 17:01:16.857047] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:44:20.502 [2024-07-24 17:01:16.857079] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:44:20.502 [2024-07-24 17:01:16.857093] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:20.502 Running I/O for 1 seconds... 
00:44:21.434 00:44:21.434 Latency(us) 00:44:21.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:21.434 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:21.434 crypto_ram : 1.03 1890.49 7.38 0.00 0.00 67098.53 6868.17 82208.36 00:44:21.434 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:21.434 crypto_ram1 : 1.03 1903.66 7.44 0.00 0.00 66256.20 6501.17 75916.90 00:44:21.434 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:21.434 crypto_ram2 : 1.02 14577.12 56.94 0.00 0.00 8635.08 2634.55 11481.91 00:44:21.434 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:44:21.434 crypto_ram3 : 1.02 14557.24 56.86 0.00 0.00 8605.97 2634.55 9070.18 00:44:21.434 =================================================================================================================== 00:44:21.434 Total : 32928.51 128.63 0.00 0.00 15340.38 2634.55 82208.36 00:44:23.959 00:44:23.959 real 0m7.693s 00:44:23.959 user 0m7.138s 00:44:23.959 sys 0m0.499s 00:44:23.959 17:01:20 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:23.959 17:01:20 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:44:23.959 ************************************ 00:44:23.959 END TEST bdev_write_zeroes 00:44:23.959 ************************************ 00:44:23.959 17:01:20 blockdev_crypto_qat -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:23.959 17:01:20 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:44:23.959 17:01:20 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:23.959 17:01:20 
blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:44:23.959 ************************************ 00:44:23.959 START TEST bdev_json_nonenclosed 00:44:23.959 ************************************ 00:44:23.959 17:01:20 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:24.217 [2024-07-24 17:01:20.916073] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:44:24.217 [2024-07-24 17:01:20.916198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920843 ] 00:44:24.473 [2024-07-24 17:01:21.141130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:24.730 [2024-07-24 17:01:21.407497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:24.730 [2024-07-24 17:01:21.407592] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:44:24.730 [2024-07-24 17:01:21.407618] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:24.730 [2024-07-24 17:01:21.407633] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:25.293 00:44:25.293 real 0m1.167s 00:44:25.293 user 0m0.883s 00:44:25.293 sys 0m0.279s 00:44:25.293 17:01:21 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:25.293 17:01:21 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:44:25.293 ************************************ 00:44:25.293 END TEST bdev_json_nonenclosed 00:44:25.293 ************************************ 00:44:25.293 17:01:22 blockdev_crypto_qat -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:25.293 17:01:22 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:44:25.293 17:01:22 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:25.293 17:01:22 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:44:25.293 ************************************ 00:44:25.293 START TEST bdev_json_nonarray 00:44:25.293 ************************************ 00:44:25.293 17:01:22 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:25.550 [2024-07-24 17:01:22.161094] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:44:25.550 [2024-07-24 17:01:22.161224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921102 ] 00:44:25.551 [2024-07-24 17:01:22.387309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:26.115 [2024-07-24 17:01:22.675737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:26.115 [2024-07-24 17:01:22.675833] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:44:26.115 [2024-07-24 17:01:22.675860] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:26.115 [2024-07-24 17:01:22.675876] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:26.679 00:44:26.679 real 0m1.176s 00:44:26.679 user 0m0.907s 00:44:26.679 sys 0m0.261s 00:44:26.679 17:01:23 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:26.679 17:01:23 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:44:26.679 ************************************ 00:44:26.679 END TEST bdev_json_nonarray 00:44:26.679 ************************************ 00:44:26.679 17:01:23 blockdev_crypto_qat -- bdev/blockdev.sh@786 -- # [[ crypto_qat == bdev ]] 00:44:26.679 17:01:23 blockdev_crypto_qat -- bdev/blockdev.sh@793 -- # [[ crypto_qat == gpt ]] 00:44:26.679 17:01:23 blockdev_crypto_qat -- bdev/blockdev.sh@797 -- # [[ crypto_qat == crypto_sw ]] 00:44:26.679 17:01:23 blockdev_crypto_qat -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:44:26.679 17:01:23 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # cleanup 00:44:26.679 17:01:23 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:44:26.679 17:01:23 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:44:26.679 17:01:23 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]] 00:44:26.679 17:01:23 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]] 00:44:26.679 17:01:23 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t ]] 00:44:26.679 17:01:23 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]] 00:44:26.679 00:44:26.679 real 1m47.942s 00:44:26.679 user 3m42.577s 00:44:26.679 sys 0m11.120s 00:44:26.679 17:01:23 blockdev_crypto_qat -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:44:26.679 17:01:23 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:44:26.679 ************************************ 00:44:26.679 END TEST blockdev_crypto_qat 00:44:26.679 ************************************ 00:44:26.679 17:01:23 -- spdk/autotest.sh@364 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:44:26.679 17:01:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:26.679 17:01:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:26.680 17:01:23 -- common/autotest_common.sh@10 -- # set +x 00:44:26.680 ************************************ 00:44:26.680 START TEST chaining 00:44:26.680 ************************************ 00:44:26.680 17:01:23 chaining -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:44:26.680 * Looking for test storage... 00:44:26.680 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:44:26.680 17:01:23 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@7 -- # uname -s 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:44:26.680 17:01:23 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:26.680 17:01:23 chaining -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:26.680 17:01:23 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:26.680 17:01:23 chaining -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:26.680 17:01:23 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:26.680 17:01:23 chaining -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:26.680 17:01:23 chaining -- paths/export.sh@5 -- # export PATH 00:44:26.680 17:01:23 chaining -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@47 -- # : 0 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:26.680 17:01:23 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:44:26.680 17:01:23 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:44:26.680 17:01:23 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 
33445566778899001122334455001122) 00:44:26.680 17:01:23 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:44:26.680 17:01:23 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:44:26.680 17:01:23 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:26.680 17:01:23 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:26.680 17:01:23 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:44:26.680 17:01:23 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:44:26.680 17:01:23 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:44:36.643 17:01:31 
chaining -- nvmf/common.sh@296 -- # e810=() 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@297 -- # x722=() 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@298 -- # mlx=() 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:44:36.643 17:01:31 chaining -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:44:36.643 Found 0000:20:00.0 (0x8086 - 0x159b) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:44:36.643 Found 0000:20:00.1 (0x8086 - 0x159b) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:44:36.643 Found net devices under 0000:20:00.0: cvl_0_0 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:44:36.643 Found net devices under 0000:20:00.1: cvl_0_1 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:44:36.643 17:01:31 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:36.644 17:01:31 chaining -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:44:36.644 17:01:31 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:44:36.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:36.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:44:36.644 00:44:36.644 --- 10.0.0.2 ping statistics --- 00:44:36.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:36.644 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:36.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:36.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:44:36.644 00:44:36.644 --- 10.0.0.1 ping statistics --- 00:44:36.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:36.644 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@422 -- # return 0 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:36.644 17:01:32 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:36.644 17:01:32 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:36.644 17:01:32 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@481 -- # nvmfpid=1925393 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:44:36.644 17:01:32 chaining -- nvmf/common.sh@482 -- # waitforlisten 1925393 00:44:36.644 17:01:32 chaining -- common/autotest_common.sh@831 -- # '[' -z 1925393 ']' 00:44:36.644 17:01:32 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:36.644 17:01:32 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:36.644 17:01:32 chaining -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:36.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:36.644 17:01:32 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:36.644 17:01:32 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:36.644 [2024-07-24 17:01:32.249056] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:44:36.644 [2024-07-24 17:01:32.249188] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:36.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:36.644 EAL: Requested device 0000:3d:01.0 cannot be used 00:44:36.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:36.644 EAL: Requested device 0000:3d:01.1 cannot be used 00:44:36.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:36.644 EAL: Requested device 0000:3d:01.2 cannot be used 00:44:36.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:36.644 EAL: Requested device 0000:3d:01.3 cannot be used 00:44:36.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:36.644 EAL: Requested device 0000:3d:01.4 cannot be used 00:44:36.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:36.644 EAL: Requested device 0000:3d:01.5 cannot be used 00:44:36.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:36.644 EAL: Requested device 0000:3d:01.6 cannot be used 00:44:36.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:36.644 EAL: Requested device 0000:3d:01.7 cannot be used 00:44:36.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:44:36.644 EAL: Requested device 0000:3d:02.0 cannot be used
(the 'qat_pci_device_allocate(): Reached maximum number of QAT devices' / 'EAL: Requested device ... cannot be used' pair repeats for each remaining QAT VF: 0000:3d:02.1-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7)
00:44:36.645 [2024-07-24 17:01:32.473930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:44:36.645 [2024-07-24 17:01:32.749055] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:44:36.645 [2024-07-24 17:01:32.749107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:44:36.645 [2024-07-24 17:01:32.749126] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:44:36.645 [2024-07-24 17:01:32.749149] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:44:36.645 [2024-07-24 17:01:32.749166] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:36.645 [2024-07-24 17:01:32.749208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:36.645 17:01:33 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:36.645 17:01:33 chaining -- common/autotest_common.sh@864 -- # return 0 00:44:36.645 17:01:33 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:36.645 17:01:33 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:36.645 17:01:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:36.645 17:01:33 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@69 -- # mktemp 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.chIqUaqoY6 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@69 -- # mktemp 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.xcKimlMYhb 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:44:36.645 17:01:33 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:36.645 17:01:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:36.645 malloc0 00:44:36.645 true 00:44:36.645 true 00:44:36.645 [2024-07-24 17:01:33.405295] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:44:36.645 crypto0 00:44:36.645 [2024-07-24 17:01:33.413308] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:44:36.645 crypto1 00:44:36.645 [2024-07-24 17:01:33.421470] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:36.645 [2024-07-24 17:01:33.437688] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
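The `get_stat`/`update_stats` trace that follows extracts counters from `accel_get_stats` JSON with jq, and `chaining.sh@32` uses a jq update-assignment to append a `bdev_set_options` entry to the config produced by `gen_nvme.sh`. A minimal sketch of both jq patterns, using hypothetical sample JSON shaped like the RPC output in this trace (not the actual target's output):

```shell
# Hypothetical stand-in for `rpc_cmd accel_get_stats` output.
stats='{"sequence_executed": 13, "operations": [{"opcode": "encrypt", "executed": 2}, {"opcode": "decrypt", "executed": 12}]}'

# get_stat with no opcode: read a top-level counter.
echo "$stats" | jq -r .sequence_executed                                        # 13

# get_stat with an opcode: filter the operations array for that opcode.
echo "$stats" | jq -r '.operations[] | select(.opcode == "decrypt").executed'   # 12

# chaining.sh@32 pattern: assigning to the index equal to the array's current
# length appends a new element (here, to a toy config with an empty array).
config='{"subsystems": [{"subsystem": "bdev", "config": []}]}'
echo "$config" | jq -c '.subsystems[0].config[.subsystems[0].config | length] |=
  {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}'
```

The append works because `|=` applies its right-hand filter to the current value at that path (`null` for a not-yet-existing index), and a constant object filter ignores its input, so the object lands at the end of the array.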
00:44:36.645 17:01:33 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@85 -- # update_stats 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:36.645 17:01:33 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:36.645 17:01:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:36.645 17:01:33 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:36.645 17:01:33 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:36.903 17:01:33 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:36.903 17:01:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:36.903 17:01:33 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:36.904 17:01:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:36.904 17:01:33 chaining -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:36.904 17:01:33 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:36.904 17:01:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:36.904 17:01:33 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:36.904 17:01:33 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:36.904 17:01:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:36.904 17:01:33 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@88 -- # dd if=/dev/urandom of=/tmp/tmp.chIqUaqoY6 bs=1K count=64 00:44:36.904 64+0 records in 00:44:36.904 64+0 records out 00:44:36.904 65536 bytes (66 kB, 64 KiB) copied, 0.0010489 s, 62.5 MB/s 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.chIqUaqoY6 --ob Nvme0n1 --bs 65536 --count 1 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@25 -- # local config 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:44:36.904 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@31 -- # config='{ 00:44:36.904 "subsystems": [ 00:44:36.904 { 00:44:36.904 "subsystem": "bdev", 00:44:36.904 "config": [ 00:44:36.904 { 00:44:36.904 "method": "bdev_nvme_attach_controller", 00:44:36.904 "params": { 00:44:36.904 "trtype": "tcp", 00:44:36.904 "adrfam": "IPv4", 00:44:36.904 "name": "Nvme0", 00:44:36.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:36.904 "traddr": "10.0.0.2", 00:44:36.904 "trsvcid": "4420" 00:44:36.904 } 00:44:36.904 }, 00:44:36.904 { 00:44:36.904 "method": "bdev_set_options", 00:44:36.904 "params": { 00:44:36.904 "bdev_auto_examine": false 00:44:36.904 } 00:44:36.904 } 00:44:36.904 ] 00:44:36.904 } 00:44:36.904 ] 00:44:36.904 }' 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.chIqUaqoY6 --ob Nvme0n1 --bs 65536 --count 1 00:44:36.904 17:01:33 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:44:36.904 "subsystems": [ 00:44:36.904 { 00:44:36.904 
"subsystem": "bdev", 00:44:36.904 "config": [ 00:44:36.904 { 00:44:36.904 "method": "bdev_nvme_attach_controller", 00:44:36.904 "params": { 00:44:36.904 "trtype": "tcp", 00:44:36.904 "adrfam": "IPv4", 00:44:36.904 "name": "Nvme0", 00:44:36.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:36.904 "traddr": "10.0.0.2", 00:44:36.904 "trsvcid": "4420" 00:44:36.904 } 00:44:36.904 }, 00:44:36.904 { 00:44:36.904 "method": "bdev_set_options", 00:44:36.904 "params": { 00:44:36.904 "bdev_auto_examine": false 00:44:36.904 } 00:44:36.904 } 00:44:36.904 ] 00:44:36.904 } 00:44:36.904 ] 00:44:36.904 }' 00:44:37.162 [2024-07-24 17:01:33.799003] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:44:37.162 [2024-07-24 17:01:33.799116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1925709 ] 00:44:37.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:37.162 EAL: Requested device 0000:3d:01.0 cannot be used 00:44:37.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:37.162 EAL: Requested device 0000:3d:01.1 cannot be used 00:44:37.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:37.162 EAL: Requested device 0000:3d:01.2 cannot be used 00:44:37.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:37.162 EAL: Requested device 0000:3d:01.3 cannot be used 00:44:37.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:37.162 EAL: Requested device 0000:3d:01.4 cannot be used 00:44:37.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:37.162 EAL: Requested device 0000:3d:01.5 cannot be used 00:44:37.162 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:37.162 EAL: Requested device 0000:3d:01.6 cannot be used 
00:44:37.162 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:37.162 EAL: Requested device 0000:3d:01.7 cannot be used
(the 'qat_pci_device_allocate(): Reached maximum number of QAT devices' / 'EAL: Requested device ... cannot be used' pair repeats for each remaining QAT VF: 0000:3d:02.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7)
00:44:37.163 [2024-07-24 17:01:34.022983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:44:37.730 [2024-07-24 17:01:34.291852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:44:40.197  Copying: 64/64 [kB] (average 15 MBps)
00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:40.197 
17:01:36 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:40.197 17:01:36 chaining -- 
bdev/chaining.sh@39 -- # event=executed 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@96 -- # update_stats 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@37 
-- # local event opcode rpc 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:40.197 17:01:36 
chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:40.197 17:01:36 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:40.197 17:01:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:40.197 17:01:37 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:40.197 17:01:37 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:44:40.197 17:01:37 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:44:40.197 17:01:37 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:40.197 17:01:37 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:40.197 17:01:37 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:40.197 17:01:37 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:40.197 17:01:37 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:40.197 17:01:37 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:40.197 17:01:37 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:40.197 17:01:37 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:40.197 17:01:37 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:40.197 17:01:37 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:40.474 17:01:37 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:44:40.474 17:01:37 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.xcKimlMYhb --ib Nvme0n1 --bs 65536 --count 1 00:44:40.474 17:01:37 chaining -- bdev/chaining.sh@25 -- # local config 00:44:40.474 
17:01:37 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:44:40.474 17:01:37 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:44:40.474 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:44:40.474 17:01:37 chaining -- bdev/chaining.sh@31 -- # config='{ 00:44:40.474 "subsystems": [ 00:44:40.474 { 00:44:40.474 "subsystem": "bdev", 00:44:40.474 "config": [ 00:44:40.474 { 00:44:40.474 "method": "bdev_nvme_attach_controller", 00:44:40.474 "params": { 00:44:40.474 "trtype": "tcp", 00:44:40.474 "adrfam": "IPv4", 00:44:40.474 "name": "Nvme0", 00:44:40.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:40.474 "traddr": "10.0.0.2", 00:44:40.474 "trsvcid": "4420" 00:44:40.474 } 00:44:40.474 }, 00:44:40.474 { 00:44:40.474 "method": "bdev_set_options", 00:44:40.474 "params": { 00:44:40.474 "bdev_auto_examine": false 00:44:40.474 } 00:44:40.474 } 00:44:40.474 ] 00:44:40.474 } 00:44:40.474 ] 00:44:40.474 }' 00:44:40.474 17:01:37 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.xcKimlMYhb --ib Nvme0n1 --bs 65536 --count 1 00:44:40.474 17:01:37 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:44:40.474 "subsystems": [ 00:44:40.474 { 00:44:40.474 "subsystem": "bdev", 00:44:40.474 "config": [ 00:44:40.474 { 00:44:40.474 "method": "bdev_nvme_attach_controller", 00:44:40.474 "params": { 00:44:40.474 "trtype": "tcp", 00:44:40.474 "adrfam": "IPv4", 00:44:40.474 "name": "Nvme0", 00:44:40.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:40.474 "traddr": "10.0.0.2", 00:44:40.474 "trsvcid": "4420" 00:44:40.474 } 00:44:40.474 }, 00:44:40.474 { 00:44:40.474 "method": "bdev_set_options", 00:44:40.474 "params": { 00:44:40.474 "bdev_auto_examine": false 00:44:40.474 } 00:44:40.474 } 00:44:40.474 ] 
00:44:40.474 } 00:44:40.474 ] 00:44:40.474 }' 00:44:40.474 [2024-07-24 17:01:37.229303] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:44:40.474 [2024-07-24 17:01:37.229417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1926268 ] 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:01.0 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:01.1 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:01.2 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:01.3 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:01.4 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:01.5 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:01.6 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:01.7 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:02.0 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:02.1 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:02.2 
cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:02.3 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:02.4 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:02.5 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:02.6 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3d:02.7 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:01.0 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:01.1 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:01.2 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:01.3 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:01.4 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:01.5 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:01.6 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:01.7 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:02.0 cannot be used 
00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:02.1 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:02.2 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:02.3 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:02.4 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:02.5 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:02.6 cannot be used 00:44:40.734 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:40.734 EAL: Requested device 0000:3f:02.7 cannot be used 00:44:40.734 [2024-07-24 17:01:37.452702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:40.993 [2024-07-24 17:01:37.731802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:43.858  Copying: 64/64 [kB] (average 31 MBps) 00:44:43.858 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:43.858 17:01:40 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.858 17:01:40 
chaining -- common/autotest_common.sh@10 -- # set +x 00:44:43.858 17:01:40 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:43.858 17:01:40 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.858 17:01:40 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:43.858 17:01:40 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@102 -- # get_stat executed decrypt 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:43.858 17:01:40 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 
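The `(( ... ))` checks running through this trace all follow one pattern: snapshot each accel counter into a bash associative array before an spdk_dd transfer, then assert the expected delta afterwards (e.g. chaining.sh@100's `(( 14 == stats[sequence_executed] + 1 ))`). A minimal standalone sketch of that pattern, with a hypothetical `fake_counter` stub standing in for the real `rpc_cmd accel_get_stats | jq` pipeline (no SPDK target is assumed to be running):

```shell
#!/usr/bin/env bash
# Sketch of the snapshot/delta-check pattern used by bdev/chaining.sh.
# fake_counter is an illustrative stub, not real target output.
set -euo pipefail

declare -A stats          # "before" snapshot of accel counters
counter_value=13          # pretend current value of sequence_executed

fake_counter() { echo "$counter_value"; }

# Take the snapshot (chaining.sh@51 does this via get_stat).
stats["sequence_executed"]=$(fake_counter)

# ... a 64K spdk_dd transfer would run here, executing one accel sequence ...
counter_value=$((counter_value + 1))

# Assert the expected delta, as chaining.sh@100 does.
after=$(fake_counter)
if (( after == stats[sequence_executed] + 1 )); then
  echo "delta ok"
fi
```

Snapshotting before the I/O rather than asserting absolute values keeps the checks valid regardless of how many sequences earlier subtests already executed.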
00:44:43.858 17:01:40 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:43.858 17:01:40 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:43.858 17:01:40 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:43.858 17:01:40 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:43.858 17:01:40 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:43.858 17:01:40 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:44:43.859 17:01:40 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.chIqUaqoY6 /tmp/tmp.xcKimlMYhb 00:44:43.859 17:01:40 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:44:43.859 17:01:40 chaining -- bdev/chaining.sh@25 -- # local config 00:44:43.859 17:01:40 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:44:43.859 17:01:40 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:44:43.859 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:44:43.859 17:01:40 chaining -- 
bdev/chaining.sh@31 -- # config='{ 00:44:43.859 "subsystems": [ 00:44:43.859 { 00:44:43.859 "subsystem": "bdev", 00:44:43.859 "config": [ 00:44:43.859 { 00:44:43.859 "method": "bdev_nvme_attach_controller", 00:44:43.859 "params": { 00:44:43.859 "trtype": "tcp", 00:44:43.859 "adrfam": "IPv4", 00:44:43.859 "name": "Nvme0", 00:44:43.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:43.859 "traddr": "10.0.0.2", 00:44:43.859 "trsvcid": "4420" 00:44:43.859 } 00:44:43.859 }, 00:44:43.859 { 00:44:43.859 "method": "bdev_set_options", 00:44:43.859 "params": { 00:44:43.859 "bdev_auto_examine": false 00:44:43.859 } 00:44:43.859 } 00:44:43.859 ] 00:44:43.859 } 00:44:43.859 ] 00:44:43.859 }' 00:44:43.859 17:01:40 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:44:43.859 17:01:40 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:44:43.859 "subsystems": [ 00:44:43.859 { 00:44:43.859 "subsystem": "bdev", 00:44:43.859 "config": [ 00:44:43.859 { 00:44:43.859 "method": "bdev_nvme_attach_controller", 00:44:43.859 "params": { 00:44:43.859 "trtype": "tcp", 00:44:43.859 "adrfam": "IPv4", 00:44:43.859 "name": "Nvme0", 00:44:43.859 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:43.859 "traddr": "10.0.0.2", 00:44:43.859 "trsvcid": "4420" 00:44:43.859 } 00:44:43.859 }, 00:44:43.859 { 00:44:43.859 "method": "bdev_set_options", 00:44:43.859 "params": { 00:44:43.859 "bdev_auto_examine": false 00:44:43.859 } 00:44:43.859 } 00:44:43.859 ] 00:44:43.859 } 00:44:43.859 ] 00:44:43.859 }' 00:44:43.859 [2024-07-24 17:01:40.624865] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
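Each spdk_dd invocation above builds its config the same way: chaining.sh@31-33 generates the bdev subsystem JSON with gen_nvme.sh, uses a jq update-assignment to append a `bdev_set_options` entry that disables auto-examine, and echoes the result to spdk_dd on `/dev/fd/62`. A standalone sketch of just the jq append step, where `base_config` is a trimmed stand-in for gen_nvme.sh output (jq is assumed to be installed):

```shell
#!/usr/bin/env bash
# Append a bdev_set_options entry to a generated subsystem config,
# mirroring the jq filter used at bdev/chaining.sh@32.
set -euo pipefail

# Stand-in for `gen_nvme.sh --mode=remote --json-with-subsystems ...` output.
base_config='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller","params":{"name":"Nvme0"}}]}]}'

# `.config[.config | length] |= X` updates the slot one past the end,
# which in jq appends X to the array.
config=$(echo "$base_config" | jq '.subsystems[0].config[.subsystems[0].config | length] |=
  {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}')

echo "$config" | jq -r '.subsystems[0].config[1].method'   # bdev_set_options
```

In the test itself the resulting JSON never touches disk: it is echoed while spdk_dd reads it as a config "file" via `-c /dev/fd/62`.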
00:44:43.859 [2024-07-24 17:01:40.624978] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1926824 ] 00:44:44.117 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:44.117 EAL: Requested device 0000:3d:01.0 cannot be used 00:44:44.118 [2024-07-24 17:01:40.852114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:44.375 [2024-07-24 17:01:41.133361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:46.861  Copying: 64/64 [kB] (average 20 MBps) 00:44:46.861 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@106 -- # update_stats 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:46.861 17:01:43 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:46.861 17:01:43 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:46.861 17:01:43 chaining
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:46.861 17:01:43 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:46.861 17:01:43 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:46.861 17:01:43 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:46.861 17:01:43 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:46.861 17:01:43 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:46.861 17:01:43 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:46.861 17:01:43 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:46.861 17:01:43 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:46.861 17:01:43 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.chIqUaqoY6 --ob Nvme0n1 --bs 4096 --count 16 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@25 -- # local config 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:44:46.861 17:01:43 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:44:46.861 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:44:47.120 17:01:43 chaining -- bdev/chaining.sh@31 -- # config='{ 00:44:47.120 "subsystems": [ 00:44:47.120 { 00:44:47.120 "subsystem": "bdev", 00:44:47.120 "config": [ 00:44:47.120 { 00:44:47.120 "method": "bdev_nvme_attach_controller", 00:44:47.120 "params": 
{ 00:44:47.120 "trtype": "tcp", 00:44:47.120 "adrfam": "IPv4", 00:44:47.120 "name": "Nvme0", 00:44:47.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:47.120 "traddr": "10.0.0.2", 00:44:47.120 "trsvcid": "4420" 00:44:47.120 } 00:44:47.120 }, 00:44:47.120 { 00:44:47.120 "method": "bdev_set_options", 00:44:47.120 "params": { 00:44:47.120 "bdev_auto_examine": false 00:44:47.120 } 00:44:47.120 } 00:44:47.120 ] 00:44:47.120 } 00:44:47.120 ] 00:44:47.120 }' 00:44:47.120 17:01:43 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.chIqUaqoY6 --ob Nvme0n1 --bs 4096 --count 16 00:44:47.120 17:01:43 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:44:47.120 "subsystems": [ 00:44:47.120 { 00:44:47.120 "subsystem": "bdev", 00:44:47.120 "config": [ 00:44:47.120 { 00:44:47.120 "method": "bdev_nvme_attach_controller", 00:44:47.120 "params": { 00:44:47.120 "trtype": "tcp", 00:44:47.120 "adrfam": "IPv4", 00:44:47.120 "name": "Nvme0", 00:44:47.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:47.120 "traddr": "10.0.0.2", 00:44:47.120 "trsvcid": "4420" 00:44:47.120 } 00:44:47.120 }, 00:44:47.120 { 00:44:47.120 "method": "bdev_set_options", 00:44:47.120 "params": { 00:44:47.120 "bdev_auto_examine": false 00:44:47.120 } 00:44:47.120 } 00:44:47.120 ] 00:44:47.120 } 00:44:47.120 ] 00:44:47.120 }' 00:44:47.120 [2024-07-24 17:01:43.857254] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
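The integrity check behind these transfers is simple: chaining.sh@99 reads the 64K block back out of Nvme0n1 into a second temp file, and chaining.sh@104 `cmp`s it against the original. A rough local approximation of that write/read/compare flow, with plain coreutils dd and a scratch file standing in for spdk_dd and the NVMe-oF bdev:

```shell
#!/usr/bin/env bash
# Round-trip a 64K buffer through a scratch "device" file and cmp it,
# approximating the spdk_dd write/read/cmp flow of chaining.sh@99-104.
set -euo pipefail

src=$(mktemp) dev=$(mktemp) out=$(mktemp)
trap 'rm -f "$src" "$dev" "$out"' EXIT

dd if=/dev/urandom of="$src" bs=65536 count=1 status=none   # test pattern
dd if="$src" of="$dev" bs=65536 count=1 status=none         # "write" to device
dd if="$dev" of="$out" bs=65536 count=1 status=none         # "read" it back

cmp "$src" "$out" && echo "round-trip ok"
```

In the real test the round trip additionally passes through the encrypt/decrypt crypto bdev chain, which is why the accel counters checked before and after are expected to move.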
00:44:47.120 [2024-07-24 17:01:43.857368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1927371 ] 00:44:47.379 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:47.379 EAL: Requested device 0000:3d:01.0 cannot be used 00:44:47.380 [2024-07-24 17:01:44.082481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:47.639 [2024-07-24 17:01:44.374989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:50.115  Copying: 64/64 [kB] (average 10 MBps) 00:44:50.116 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:50.116 17:01:46 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:50.116 17:01:46 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:50.116 17:01:46 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:50.116 17:01:46
chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:50.116 17:01:46 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:50.116 17:01:46 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:50.116 17:01:46 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:50.116 17:01:46 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:50.116 17:01:46 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:50.116 17:01:46 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:50.116 17:01:46 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:50.116 17:01:46 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:50.116 17:01:46 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@114 -- # update_stats 00:44:50.116 17:01:46 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:44:50.375 17:01:46 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:50.375 17:01:46 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:50.375 17:01:46 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:50.375 17:01:46 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:50.375 17:01:46 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:50.375 17:01:46 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:50.375 17:01:46 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:50.375 17:01:46 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:50.375 17:01:46 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:50.375 17:01:46 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:50.375 17:01:47 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:50.375 17:01:47 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:50.375 17:01:47 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:50.375 17:01:47 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:50.375 17:01:47 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:50.375 17:01:47 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:50.375 17:01:47 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:50.375 17:01:47 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:50.375 17:01:47 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@117 -- # : 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.xcKimlMYhb --ib Nvme0n1 --bs 4096 --count 16 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@25 -- # local config 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:44:50.375 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@31 -- # config='{ 00:44:50.375 "subsystems": [ 00:44:50.375 { 00:44:50.375 "subsystem": "bdev", 00:44:50.375 "config": [ 00:44:50.375 { 00:44:50.375 
"method": "bdev_nvme_attach_controller", 00:44:50.375 "params": { 00:44:50.375 "trtype": "tcp", 00:44:50.375 "adrfam": "IPv4", 00:44:50.375 "name": "Nvme0", 00:44:50.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:50.375 "traddr": "10.0.0.2", 00:44:50.375 "trsvcid": "4420" 00:44:50.375 } 00:44:50.375 }, 00:44:50.375 { 00:44:50.375 "method": "bdev_set_options", 00:44:50.375 "params": { 00:44:50.375 "bdev_auto_examine": false 00:44:50.375 } 00:44:50.375 } 00:44:50.375 ] 00:44:50.375 } 00:44:50.375 ] 00:44:50.375 }' 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.xcKimlMYhb --ib Nvme0n1 --bs 4096 --count 16 00:44:50.375 17:01:47 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:44:50.375 "subsystems": [ 00:44:50.375 { 00:44:50.375 "subsystem": "bdev", 00:44:50.375 "config": [ 00:44:50.375 { 00:44:50.375 "method": "bdev_nvme_attach_controller", 00:44:50.375 "params": { 00:44:50.375 "trtype": "tcp", 00:44:50.375 "adrfam": "IPv4", 00:44:50.375 "name": "Nvme0", 00:44:50.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:50.375 "traddr": "10.0.0.2", 00:44:50.375 "trsvcid": "4420" 00:44:50.375 } 00:44:50.375 }, 00:44:50.375 { 00:44:50.375 "method": "bdev_set_options", 00:44:50.375 "params": { 00:44:50.375 "bdev_auto_examine": false 00:44:50.375 } 00:44:50.375 } 00:44:50.375 ] 00:44:50.375 } 00:44:50.375 ] 00:44:50.375 }' 00:44:50.634 [2024-07-24 17:01:47.307379] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:44:50.634 [2024-07-24 17:01:47.307495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1927933 ] 00:44:50.893 [2024-07-24 17:01:47.540214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:51.151 [2024-07-24 17:01:47.829312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:53.557  Copying: 64/64 [kB] (average 719 kBps) 00:44:53.557 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:53.557 17:01:50 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:53.557 17:01:50 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:53.557 17:01:50 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:53.557 17:01:50 
chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:53.557 17:01:50 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:53.557 17:01:50 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:53.557 17:01:50 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:53.557 17:01:50 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@121 -- # get_stat executed decrypt 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:53.820 17:01:50 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:53.820 17:01:50 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:53.820 17:01:50 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:53.820 17:01:50 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:53.820 17:01:50 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:53.820 17:01:50 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.chIqUaqoY6 /tmp/tmp.xcKimlMYhb 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.chIqUaqoY6 /tmp/tmp.xcKimlMYhb 00:44:53.820 17:01:50 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:44:53.820 17:01:50 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:44:53.820 17:01:50 chaining -- nvmf/common.sh@117 -- # sync 00:44:53.820 17:01:50 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:44:53.820 17:01:50 chaining -- nvmf/common.sh@120 -- # set +e 00:44:53.820 17:01:50 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:44:53.820 17:01:50 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:44:53.820 rmmod nvme_tcp 
00:44:53.820 rmmod nvme_fabrics 00:44:53.821 rmmod nvme_keyring 00:44:53.821 17:01:50 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:44:53.821 17:01:50 chaining -- nvmf/common.sh@124 -- # set -e 00:44:53.821 17:01:50 chaining -- nvmf/common.sh@125 -- # return 0 00:44:53.821 17:01:50 chaining -- nvmf/common.sh@489 -- # '[' -n 1925393 ']' 00:44:53.821 17:01:50 chaining -- nvmf/common.sh@490 -- # killprocess 1925393 00:44:53.821 17:01:50 chaining -- common/autotest_common.sh@950 -- # '[' -z 1925393 ']' 00:44:53.821 17:01:50 chaining -- common/autotest_common.sh@954 -- # kill -0 1925393 00:44:53.821 17:01:50 chaining -- common/autotest_common.sh@955 -- # uname 00:44:53.821 17:01:50 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:44:53.821 17:01:50 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1925393 00:44:53.821 17:01:50 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:44:53.821 17:01:50 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:44:53.821 17:01:50 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1925393' 00:44:53.821 killing process with pid 1925393 00:44:53.821 17:01:50 chaining -- common/autotest_common.sh@969 -- # kill 1925393 00:44:53.821 17:01:50 chaining -- common/autotest_common.sh@974 -- # wait 1925393 00:44:55.724 17:01:52 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:44:55.724 17:01:52 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:44:55.724 17:01:52 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:44:55.724 17:01:52 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:44:55.724 17:01:52 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:44:55.724 17:01:52 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:55.724 17:01:52 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
00:44:55.724 17:01:52 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:57.624 17:01:54 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:44:57.624 17:01:54 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:44:57.624 17:01:54 chaining -- bdev/chaining.sh@132 -- # bperfpid=1929230 00:44:57.624 17:01:54 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:44:57.624 17:01:54 chaining -- bdev/chaining.sh@134 -- # waitforlisten 1929230 00:44:57.624 17:01:54 chaining -- common/autotest_common.sh@831 -- # '[' -z 1929230 ']' 00:44:57.624 17:01:54 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:57.624 17:01:54 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:57.624 17:01:54 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:57.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:57.624 17:01:54 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:57.624 17:01:54 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:57.884 [2024-07-24 17:01:54.561183] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:44:57.884 [2024-07-24 17:01:54.561308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1929230 ] 00:44:58.144 [2024-07-24 17:01:54.787017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:58.438 [2024-07-24 17:01:55.072871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:58.696 17:01:55 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:58.696 17:01:55 chaining -- common/autotest_common.sh@864 -- # return 0 00:44:58.696 17:01:55 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:44:58.696 17:01:55 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:58.696 17:01:55 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:59.262 malloc0 00:44:59.262 true 00:44:59.262 true 00:44:59.262 [2024-07-24 17:01:56.031791] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:44:59.262 crypto0 00:44:59.262 [2024-07-24 17:01:56.039828] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:44:59.262 crypto1 00:44:59.262 17:01:56 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:59.262 17:01:56 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py 
perform_tests 00:44:59.519 Running I/O for 5 seconds... 00:45:04.797 00:45:04.797 Latency(us) 00:45:04.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:04.797 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:45:04.797 Verification LBA range: start 0x0 length 0x2000 00:45:04.797 crypto1 : 5.01 11390.94 44.50 0.00 0.00 22412.61 6291.46 16357.79 00:45:04.797 =================================================================================================================== 00:45:04.797 Total : 11390.94 44.50 0.00 0.00 22412.61 6291.46 16357.79 00:45:04.797 0 00:45:04.797 17:02:01 chaining -- bdev/chaining.sh@146 -- # killprocess 1929230 00:45:04.797 17:02:01 chaining -- common/autotest_common.sh@950 -- # '[' -z 1929230 ']' 00:45:04.797 17:02:01 chaining -- common/autotest_common.sh@954 -- # kill -0 1929230 00:45:04.797 17:02:01 chaining -- common/autotest_common.sh@955 -- # uname 00:45:04.797 17:02:01 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:04.797 17:02:01 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1929230 00:45:04.797 17:02:01 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:04.797 17:02:01 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:04.797 17:02:01 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1929230' 00:45:04.797 killing process with pid 1929230 00:45:04.797 17:02:01 chaining -- common/autotest_common.sh@969 -- # kill 1929230 00:45:04.797 Received shutdown signal, test time was about 5.000000 seconds 00:45:04.797 00:45:04.797 Latency(us) 00:45:04.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:04.797 =================================================================================================================== 00:45:04.797 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:04.797 17:02:01 chaining -- 
common/autotest_common.sh@974 -- # wait 1929230 00:45:06.170 17:02:03 chaining -- bdev/chaining.sh@152 -- # bperfpid=1930613 00:45:06.170 17:02:03 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:45:06.170 17:02:03 chaining -- bdev/chaining.sh@154 -- # waitforlisten 1930613 00:45:06.170 17:02:03 chaining -- common/autotest_common.sh@831 -- # '[' -z 1930613 ']' 00:45:06.170 17:02:03 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:06.428 17:02:03 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:06.428 17:02:03 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:06.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:06.428 17:02:03 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:06.428 17:02:03 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:06.428 [2024-07-24 17:02:03.130614] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:45:06.428 [2024-07-24 17:02:03.130737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930613 ] 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:01.0 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:01.1 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:01.2 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:01.3 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:01.4 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:01.5 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:01.6 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:01.7 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:02.0 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:02.1 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:02.2 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:02.3 cannot be used 
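The block above repeats "qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used" once per QAT virtual function. A small filter (illustrative only, not part of the SPDK test scripts) makes such spam easier to audit by collapsing it into a per-bus count:

```shell
# Illustrative log filter (not part of the SPDK test scripts): collapse the
# repeated "EAL: Requested device <BDF> cannot be used" lines above into a
# summary of how many devices were rejected per PCI bus.
summarize_qat_rejects() {
    grep -o 'Requested device [0-9a-f:.]* cannot be used' |
        awk '{ split($3, bdf, ":"); count[bdf[2]]++ }
             END { for (bus in count) print "bus " bus ": " count[bus] " devices rejected" }'
}

# Demo on a two-line excerpt of the log format shown above.
printf '%s\n' \
    'EAL: Requested device 0000:3d:01.0 cannot be used' \
    'EAL: Requested device 0000:3f:01.0 cannot be used' |
    summarize_qat_rejects
```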
00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:02.4 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:02.5 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:02.6 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3d:02.7 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3f:01.0 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3f:01.1 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3f:01.2 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3f:01.3 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3f:01.4 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3f:01.5 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3f:01.6 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3f:01.7 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3f:02.0 cannot be used 00:45:06.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.428 EAL: Requested device 0000:3f:02.1 cannot be used 00:45:06.429 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.429 EAL: Requested device 0000:3f:02.2 cannot be used 00:45:06.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.429 EAL: Requested device 0000:3f:02.3 cannot be used 00:45:06.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.429 EAL: Requested device 0000:3f:02.4 cannot be used 00:45:06.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.429 EAL: Requested device 0000:3f:02.5 cannot be used 00:45:06.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.429 EAL: Requested device 0000:3f:02.6 cannot be used 00:45:06.429 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:06.429 EAL: Requested device 0000:3f:02.7 cannot be used 00:45:06.695 [2024-07-24 17:02:03.354951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:06.951 [2024-07-24 17:02:03.637375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:07.207 17:02:03 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:07.207 17:02:03 chaining -- common/autotest_common.sh@864 -- # return 0 00:45:07.207 17:02:03 chaining -- bdev/chaining.sh@155 -- # rpc_cmd 00:45:07.207 17:02:03 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:07.207 17:02:03 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:07.772 malloc0 00:45:07.772 true 00:45:07.772 true 00:45:07.772 [2024-07-24 17:02:04.525876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:45:07.772 [2024-07-24 17:02:04.525943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:45:07.772 [2024-07-24 17:02:04.525969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:45:07.772 [2024-07-24 17:02:04.525987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:45:07.772 [2024-07-24 
17:02:04.527728] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:45:07.772 [2024-07-24 17:02:04.527766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:45:07.772 pt0
00:45:07.772 [2024-07-24 17:02:04.533920] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0"
00:45:07.772 crypto0
00:45:07.772 [2024-07-24 17:02:04.541924] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1"
00:45:07.772 crypto1
00:45:07.772 17:02:04 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:45:07.772 17:02:04 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:45:08.029 Running I/O for 5 seconds...
00:45:13.288
00:45:13.288 Latency(us)
00:45:13.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:45:13.289 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096)
00:45:13.289 Verification LBA range: start 0x0 length 0x2000
00:45:13.289 crypto1 : 5.01 8794.43 34.35 0.00 0.00 29016.56 2634.55 18035.51
00:45:13.289 ===================================================================================================================
00:45:13.289 Total : 8794.43 34.35 0.00 0.00 29016.56 2634.55 18035.51
00:45:13.289 0
00:45:13.289 17:02:09 chaining -- bdev/chaining.sh@167 -- # killprocess 1930613
00:45:13.289 17:02:09 chaining -- common/autotest_common.sh@950 -- # '[' -z 1930613 ']'
00:45:13.289 17:02:09 chaining -- common/autotest_common.sh@954 -- # kill -0 1930613
00:45:13.289 17:02:09 chaining -- common/autotest_common.sh@955 -- # uname
00:45:13.289 17:02:09 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:45:13.289 17:02:09 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1930613
00:45:13.289 17:02:09 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:45:13.289 17:02:09 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:45:13.289 17:02:09 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1930613'
00:45:13.289 killing process with pid 1930613
00:45:13.289 17:02:09 chaining -- common/autotest_common.sh@969 -- # kill 1930613
00:45:13.289 Received shutdown signal, test time was about 5.000000 seconds
00:45:13.289
00:45:13.289 Latency(us)
00:45:13.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:45:13.289 ===================================================================================================================
00:45:13.289 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:45:13.289 17:02:09 chaining -- common/autotest_common.sh@974 -- # wait 1930613
00:45:15.189 17:02:11 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT
00:45:15.189 17:02:11 chaining -- bdev/chaining.sh@170 -- # killprocess 1930613
00:45:15.189 17:02:11 chaining -- common/autotest_common.sh@950 -- # '[' -z 1930613 ']'
00:45:15.189 17:02:11 chaining -- common/autotest_common.sh@954 -- # kill -0 1930613
00:45:15.189 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1930613) - No such process
00:45:15.189 17:02:11 chaining -- common/autotest_common.sh@977 -- # echo 'Process with pid 1930613 is not found'
00:45:15.189 Process with pid 1930613 is not found
00:45:15.189 17:02:11 chaining -- bdev/chaining.sh@171 -- # wait 1930613
00:45:15.189 17:02:11 chaining -- bdev/chaining.sh@175 -- # nvmftestinit
00:45:15.189 17:02:11 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:45:15.189 17:02:11 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:45:15.189 17:02:11 chaining -- nvmf/common.sh@448 -- # prepare_net_devs
00:45:15.190 17:02:11 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no
00:45:15.190 17:02:11 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns
00:45:15.190 17:02:11 chaining -- nvmf/common.sh@628
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:15.190 17:02:11 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:15.190 17:02:11 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:45:15.190 17:02:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@296 -- # e810=() 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@297 -- # x722=() 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@298 -- # mlx=() 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:15.190 17:02:11 
chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:45:15.190 Found 0000:20:00.0 (0x8086 - 0x159b) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:45:15.190 
17:02:11 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:45:15.190 Found 0000:20:00.1 (0x8086 - 0x159b) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:45:15.190 Found net devices under 0000:20:00.0: cvl_0_0 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 
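The "Found net devices under 0000:20:00.x" lines above come from nvmf/common.sh globbing /sys/bus/pci/devices/<BDF>/net/ for each detected NIC and stripping the path down to the interface name. A sketch of that pattern, using a temporary directory in place of sysfs so it runs without the real hardware (the BDFs and cvl_0_* names are copied from the log):

```shell
# Sketch of the NIC discovery pattern in nvmf/common.sh: list the network
# interfaces sysfs exposes under each PCI device. A temp directory stands
# in for /sys/bus/pci/devices so the sketch runs anywhere.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:20:00.0/net/cvl_0_0" "$sysfs/0000:20:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:20:00.0 0000:20:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)        # glob, as in nvmf/common.sh
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip directory prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

rm -rf "$sysfs"
```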
00:45:15.190 17:02:11 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:45:15.190 Found net devices under 0000:20:00.1: cvl_0_1 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:45:15.190 17:02:11 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:15.190 17:02:11 chaining -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:45:15.191 17:02:11 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:45:15.191 17:02:11 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:45:15.191 17:02:11 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:45:15.191 17:02:11 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:45:15.191 17:02:11 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:45:15.191 17:02:11 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:45:15.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:45:15.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms
00:45:15.191
00:45:15.191 --- 10.0.0.2 ping statistics ---
00:45:15.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:45:15.191 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms
00:45:15.191 17:02:11 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:45:15.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:45:15.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:45:15.191 00:45:15.191 --- 10.0.0.1 ping statistics --- 00:45:15.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:15.191 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:45:15.191 17:02:11 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:15.191 17:02:11 chaining -- nvmf/common.sh@422 -- # return 0 00:45:15.191 17:02:11 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:45:15.191 17:02:11 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:45:15.191 17:02:11 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:45:15.191 17:02:11 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:45:15.191 17:02:11 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:45:15.191 17:02:11 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:45:15.191 17:02:11 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:45:15.191 17:02:11 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2 00:45:15.191 17:02:11 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:45:15.191 17:02:11 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:15.191 17:02:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:15.191 17:02:11 chaining -- nvmf/common.sh@481 -- # nvmfpid=1931963 00:45:15.191 17:02:11 chaining -- nvmf/common.sh@482 -- # waitforlisten 1931963 00:45:15.191 17:02:11 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:45:15.191 17:02:11 chaining -- common/autotest_common.sh@831 -- # '[' -z 1931963 ']' 00:45:15.191 17:02:11 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:15.191 17:02:11 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:15.191 17:02:11 
chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:15.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:15.191 17:02:11 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:15.191 17:02:11 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:15.191 [2024-07-24 17:02:12.046707] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:45:15.191 [2024-07-24 17:02:12.046827] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:45:15.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.448 EAL: Requested device 0000:3d:01.0 cannot be used 00:45:15.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.448 EAL: Requested device 0000:3d:01.1 cannot be used 00:45:15.448 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.448 EAL: Requested device 0000:3d:01.2 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3d:01.3 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3d:01.4 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3d:01.5 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3d:01.6 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3d:01.7 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:45:15.449 EAL: Requested device 0000:3d:02.0 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3d:02.1 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3d:02.2 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3d:02.3 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3d:02.4 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3d:02.5 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3d:02.6 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3d:02.7 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:01.0 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:01.1 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:01.2 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:01.3 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:01.4 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:01.5 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: 
Requested device 0000:3f:01.6 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:01.7 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:02.0 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:02.1 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:02.2 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:02.3 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:02.4 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:02.5 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:02.6 cannot be used 00:45:15.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:15.449 EAL: Requested device 0000:3f:02.7 cannot be used 00:45:15.449 [2024-07-24 17:02:12.271233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:15.706 [2024-07-24 17:02:12.536229] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:45:15.706 [2024-07-24 17:02:12.536284] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:45:15.706 [2024-07-24 17:02:12.536304] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:45:15.706 [2024-07-24 17:02:12.536319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
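The nvmftestinit sequence earlier in the trace moves cvl_0_0 into the cvl_0_0_ns_spdk namespace, assigns 10.0.0.1/10.0.0.2, brings the links up, and verifies connectivity with ping. A dry-run sketch of that sequence is below; commands are echoed rather than executed, since the real ones need root and the actual interfaces, and the names and addresses are simply copied from this log:

```shell
# Dry-run sketch of the nvmftestinit network setup seen in the trace.
# Commands are echoed for inspection; to actually apply them (as root,
# with real interfaces), redefine run() { "$@"; }.
run() { echo "$*"; }

nvmf_tcp_init_sketch() {
    local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$ini_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run ping -c 1 10.0.0.2
}

nvmf_tcp_init_sketch
```

Putting the target interface in its own namespace is what lets initiator and target share one host while still exercising a real TCP path between two IP stacks.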
00:45:15.706 [2024-07-24 17:02:12.536342] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:15.706 [2024-07-24 17:02:12.536392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:45:16.271 17:02:13 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:16.271 17:02:13 chaining -- common/autotest_common.sh@864 -- # return 0 00:45:16.271 17:02:13 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:16.271 17:02:13 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:16.271 17:02:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:16.271 17:02:13 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:16.271 17:02:13 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:45:16.271 17:02:13 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:16.271 17:02:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:16.271 malloc0 00:45:16.271 [2024-07-24 17:02:13.120809] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:16.528 [2024-07-24 17:02:13.137054] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:16.528 17:02:13 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:16.528 17:02:13 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:45:16.528 17:02:13 chaining -- bdev/chaining.sh@189 -- # bperfpid=1932246 00:45:16.528 17:02:13 chaining -- bdev/chaining.sh@191 -- # waitforlisten 1932246 /var/tmp/bperf.sock 00:45:16.528 17:02:13 chaining -- common/autotest_common.sh@831 -- # '[' -z 1932246 ']' 00:45:16.528 17:02:13 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:16.528 17:02:13 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:16.528 17:02:13 chaining -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:16.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:16.528 17:02:13 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:16.528 17:02:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:16.528 17:02:13 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:45:16.528 [2024-07-24 17:02:13.354192] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 00:45:16.528 [2024-07-24 17:02:13.354456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1932246 ] 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:01.0 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:01.1 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:01.2 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:01.3 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:01.4 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:01.5 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:01.6 cannot be used 
00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:01.7 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:02.0 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:02.1 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:02.2 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:02.3 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:02.4 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:02.5 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:02.6 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3d:02.7 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:01.0 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:01.1 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:01.2 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:01.3 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:01.4 cannot be used 00:45:16.785 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:01.5 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:01.6 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:01.7 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:02.0 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:02.1 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:02.2 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:02.3 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:02.4 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:02.5 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:02.6 cannot be used 00:45:16.785 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:16.785 EAL: Requested device 0000:3f:02.7 cannot be used 00:45:17.042 [2024-07-24 17:02:13.719839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:17.299 [2024-07-24 17:02:14.018005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:17.299 17:02:14 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:17.299 17:02:14 chaining -- common/autotest_common.sh@864 -- # return 0 00:45:17.299 17:02:14 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:45:17.299 17:02:14 
chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:45:18.244 [2024-07-24 17:02:14.912364] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:45:18.244 nvme0n1 00:45:18.244 true 00:45:18.244 crypto0 00:45:18.244 17:02:14 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:18.244 Running I/O for 5 seconds... 00:45:23.521 00:45:23.521 Latency(us) 00:45:23.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:23.521 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:45:23.521 Verification LBA range: start 0x0 length 0x2000 00:45:23.521 crypto0 : 5.02 8219.16 32.11 0.00 0.00 31039.07 4377.80 28311.55 00:45:23.521 =================================================================================================================== 00:45:23.521 Total : 8219.16 32.11 0.00 0.00 31039.07 4377.80 28311.55 00:45:23.521 0 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@39 -- # opcode= 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:23.521 
17:02:20 chaining -- bdev/chaining.sh@205 -- # sequence=82548 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:45:23.521 17:02:20 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:23.778 17:02:20 chaining -- bdev/chaining.sh@206 -- # encrypt=41274 00:45:23.778 17:02:20 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:45:23.778 17:02:20 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:45:23.778 17:02:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:23.778 17:02:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:23.778 17:02:20 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:45:23.778 17:02:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:23.778 17:02:20 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:45:23.778 17:02:20 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:23.778 17:02:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:45:23.778 17:02:20 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
accel_get_stats 00:45:24.035 17:02:20 chaining -- bdev/chaining.sh@207 -- # decrypt=41274 00:45:24.035 17:02:20 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:45:24.035 17:02:20 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:45:24.035 17:02:20 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:24.035 17:02:20 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:24.035 17:02:20 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:45:24.035 17:02:20 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:24.035 17:02:20 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:45:24.035 17:02:20 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:24.035 17:02:20 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:45:24.035 17:02:20 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:24.293 17:02:21 chaining -- bdev/chaining.sh@208 -- # crc32c=82548 00:45:24.293 17:02:21 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:45:24.293 17:02:21 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:45:24.293 17:02:21 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:45:24.293 17:02:21 chaining -- bdev/chaining.sh@214 -- # killprocess 1932246 00:45:24.293 17:02:21 chaining -- common/autotest_common.sh@950 -- # '[' -z 1932246 ']' 00:45:24.293 17:02:21 chaining -- common/autotest_common.sh@954 -- # kill -0 1932246 00:45:24.293 17:02:21 chaining -- common/autotest_common.sh@955 -- # uname 00:45:24.293 17:02:21 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:24.293 17:02:21 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1932246 00:45:24.293 17:02:21 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:24.293 17:02:21 
chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:24.293 17:02:21 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1932246' 00:45:24.293 killing process with pid 1932246 00:45:24.293 17:02:21 chaining -- common/autotest_common.sh@969 -- # kill 1932246 00:45:24.293 Received shutdown signal, test time was about 5.000000 seconds 00:45:24.293 00:45:24.293 Latency(us) 00:45:24.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:24.293 =================================================================================================================== 00:45:24.293 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:24.293 17:02:21 chaining -- common/autotest_common.sh@974 -- # wait 1932246 00:45:26.191 17:02:22 chaining -- bdev/chaining.sh@219 -- # bperfpid=1933833 00:45:26.191 17:02:22 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:45:26.191 17:02:22 chaining -- bdev/chaining.sh@221 -- # waitforlisten 1933833 /var/tmp/bperf.sock 00:45:26.191 17:02:22 chaining -- common/autotest_common.sh@831 -- # '[' -z 1933833 ']' 00:45:26.191 17:02:22 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:26.191 17:02:22 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:26.191 17:02:22 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:26.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:26.191 17:02:22 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:26.191 17:02:22 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:26.191 [2024-07-24 17:02:22.892197] Starting SPDK v24.09-pre git sha1 8ee2672c4 / DPDK 24.03.0 initialization... 
00:45:26.191 [2024-07-24 17:02:22.892328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1933833 ] 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:01.0 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:01.1 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:01.2 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:01.3 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:01.4 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:01.5 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:01.6 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:01.7 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:02.0 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:02.1 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:02.2 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:02.3 cannot be used 
00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:02.4 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:02.5 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:02.6 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3d:02.7 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3f:01.0 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3f:01.1 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3f:01.2 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3f:01.3 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3f:01.4 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3f:01.5 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3f:01.6 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3f:01.7 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3f:02.0 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3f:02.1 cannot be used 00:45:26.191 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3f:02.2 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.191 EAL: Requested device 0000:3f:02.3 cannot be used 00:45:26.191 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.192 EAL: Requested device 0000:3f:02.4 cannot be used 00:45:26.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.192 EAL: Requested device 0000:3f:02.5 cannot be used 00:45:26.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.192 EAL: Requested device 0000:3f:02.6 cannot be used 00:45:26.192 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:26.192 EAL: Requested device 0000:3f:02.7 cannot be used 00:45:26.449 [2024-07-24 17:02:23.117057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:26.706 [2024-07-24 17:02:23.407389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:26.963 17:02:23 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:26.963 17:02:23 chaining -- common/autotest_common.sh@864 -- # return 0 00:45:26.963 17:02:23 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:45:26.963 17:02:23 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:45:27.899 [2024-07-24 17:02:24.568735] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:45:27.899 nvme0n1 00:45:27.899 true 00:45:27.899 crypto0 00:45:27.899 17:02:24 chaining -- bdev/chaining.sh@231 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:27.899 Running I/O for 5 seconds... 
00:45:33.161 00:45:33.161 Latency(us) 00:45:33.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:33.161 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:45:33.161 Verification LBA range: start 0x0 length 0x200 00:45:33.161 crypto0 : 5.01 1676.68 104.79 0.00 0.00 18705.14 671.74 21181.24 00:45:33.161 =================================================================================================================== 00:45:33.161 Total : 1676.68 104.79 0.00 0.00 18705.14 671.74 21181.24 00:45:33.161 0 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@39 -- # opcode= 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@233 -- # sequence=16786 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:45:33.161 17:02:29 chaining -- 
bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:33.161 17:02:29 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:45:33.420 17:02:30 chaining -- bdev/chaining.sh@234 -- # encrypt=8393 00:45:33.420 17:02:30 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:45:33.420 17:02:30 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:45:33.420 17:02:30 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:33.420 17:02:30 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:33.420 17:02:30 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:45:33.420 17:02:30 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:33.420 17:02:30 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:45:33.420 17:02:30 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:33.420 17:02:30 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:33.420 17:02:30 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:45:33.678 17:02:30 chaining -- bdev/chaining.sh@235 -- # decrypt=8393 00:45:33.678 17:02:30 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:45:33.678 17:02:30 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:45:33.678 17:02:30 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:33.678 17:02:30 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:33.678 17:02:30 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:45:33.678 17:02:30 
chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:33.678 17:02:30 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:45:33.678 17:02:30 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:33.678 17:02:30 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:33.678 17:02:30 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:45:33.937 17:02:30 chaining -- bdev/chaining.sh@236 -- # crc32c=16786 00:45:33.937 17:02:30 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:45:33.937 17:02:30 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:45:33.937 17:02:30 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:45:33.937 17:02:30 chaining -- bdev/chaining.sh@242 -- # killprocess 1933833 00:45:33.937 17:02:30 chaining -- common/autotest_common.sh@950 -- # '[' -z 1933833 ']' 00:45:33.937 17:02:30 chaining -- common/autotest_common.sh@954 -- # kill -0 1933833 00:45:33.937 17:02:30 chaining -- common/autotest_common.sh@955 -- # uname 00:45:33.937 17:02:30 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:33.937 17:02:30 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1933833 00:45:33.937 17:02:30 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:33.937 17:02:30 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:33.937 17:02:30 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1933833' 00:45:33.937 killing process with pid 1933833 00:45:33.937 17:02:30 chaining -- common/autotest_common.sh@969 -- # kill 1933833 00:45:33.937 Received shutdown signal, test time was about 5.000000 seconds 00:45:33.937 00:45:33.937 Latency(us) 00:45:33.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:33.937 
=================================================================================================================== 00:45:33.937 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:33.937 17:02:30 chaining -- common/autotest_common.sh@974 -- # wait 1933833 00:45:35.841 17:02:32 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:45:35.841 17:02:32 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:45:35.841 17:02:32 chaining -- nvmf/common.sh@117 -- # sync 00:45:35.841 17:02:32 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:35.841 17:02:32 chaining -- nvmf/common.sh@120 -- # set +e 00:45:35.841 17:02:32 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:35.841 17:02:32 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:45:35.841 rmmod nvme_tcp 00:45:35.841 rmmod nvme_fabrics 00:45:35.841 rmmod nvme_keyring 00:45:35.841 17:02:32 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:35.841 17:02:32 chaining -- nvmf/common.sh@124 -- # set -e 00:45:35.841 17:02:32 chaining -- nvmf/common.sh@125 -- # return 0 00:45:35.841 17:02:32 chaining -- nvmf/common.sh@489 -- # '[' -n 1931963 ']' 00:45:35.841 17:02:32 chaining -- nvmf/common.sh@490 -- # killprocess 1931963 00:45:35.841 17:02:32 chaining -- common/autotest_common.sh@950 -- # '[' -z 1931963 ']' 00:45:35.841 17:02:32 chaining -- common/autotest_common.sh@954 -- # kill -0 1931963 00:45:35.841 17:02:32 chaining -- common/autotest_common.sh@955 -- # uname 00:45:35.841 17:02:32 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:35.841 17:02:32 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1931963 00:45:35.841 17:02:32 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:35.841 17:02:32 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:35.841 17:02:32 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1931963' 00:45:35.841 killing process with pid 
1931963 00:45:35.841 17:02:32 chaining -- common/autotest_common.sh@969 -- # kill 1931963 00:45:35.841 17:02:32 chaining -- common/autotest_common.sh@974 -- # wait 1931963 00:45:37.744 17:02:34 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:45:37.744 17:02:34 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:37.744 17:02:34 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:37.744 17:02:34 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:37.744 17:02:34 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:37.744 17:02:34 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:37.744 17:02:34 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:37.744 17:02:34 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:39.702 17:02:36 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:45:39.702 17:02:36 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:45:39.702 00:45:39.702 real 1m13.088s 00:45:39.702 user 1m36.736s 00:45:39.702 sys 0m14.747s 00:45:39.702 17:02:36 chaining -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:39.702 17:02:36 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:39.702 ************************************ 00:45:39.702 END TEST chaining 00:45:39.702 ************************************ 00:45:39.702 17:02:36 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:45:39.702 17:02:36 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:45:39.702 17:02:36 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:45:39.702 17:02:36 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:45:39.702 17:02:36 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:45:39.702 17:02:36 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:45:39.702 17:02:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:39.702 17:02:36 -- common/autotest_common.sh@10 -- # set +x 00:45:39.702 17:02:36 
-- spdk/autotest.sh@387 -- # autotest_cleanup 00:45:39.702 17:02:36 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:45:39.702 17:02:36 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:45:39.702 17:02:36 -- common/autotest_common.sh@10 -- # set +x 00:45:46.270 INFO: APP EXITING 00:45:46.270 INFO: killing all VMs 00:45:46.270 INFO: killing vhost app 00:45:46.270 INFO: EXIT DONE 00:45:49.556 Waiting for block devices as requested 00:45:49.556 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:45:49.556 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:45:49.556 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:45:49.556 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:45:49.815 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:45:49.815 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:45:49.815 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:45:50.073 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:45:50.073 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:45:50.073 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:45:50.332 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:45:50.332 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:45:50.332 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:45:50.591 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:45:50.591 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:45:50.591 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:45:50.849 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:45:56.121 Cleaning 00:45:56.121 Removing: /var/run/dpdk/spdk0/config 00:45:56.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:56.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:56.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:56.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:56.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:56.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:56.121 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:56.121 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:56.121 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:56.121 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:56.121 Removing: /dev/shm/nvmf_trace.0 00:45:56.121 Removing: /dev/shm/spdk_tgt_trace.pid1511326 00:45:56.121 Removing: /var/run/dpdk/spdk0 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1503709 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1508017 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1511326 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1512561 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1514092 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1514971 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1516597 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1516872 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1517781 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1522956 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1525572 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1526297 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1527220 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1528274 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1529375 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1529667 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1530116 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1530527 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1531634 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1535383 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1535843 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1536205 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1537263 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1537597 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1538142 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1538689 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1539235 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1539801 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1540468 00:45:56.121 Removing: /var/run/dpdk/spdk_pid1541128 00:45:56.121 Removing: 
/var/run/dpdk/spdk_pid1541674
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1542235
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1542789
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1543335
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1543899
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1544538
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1545186
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1545774
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1546321
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1546867
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1547424
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1547976
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1548564
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1549200
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1549795
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1550728
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1551610
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1552179
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1553383
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1553975
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1554741
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1555342
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1556099
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1556685
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1557603
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1558744
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1560091
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1561067
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1567454
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1570081
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1572921
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1574728
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1577024
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1578096
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1578241
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1578488
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1583795
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1584801
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1586882
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1587617
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1599604
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1601933
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1603351
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1609268
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1611525
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1612947
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1619567
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1622535
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1623953
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1636505
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1639524
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1641198
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1654180
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1657073
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1658751
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1671390
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1675707
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1677218
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1691863
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1695352
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1697036
00:45:56.121 Removing: /var/run/dpdk/spdk_pid1711382
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1714662
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1716345
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1731532
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1736287
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1737979
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1739665
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1744111
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1750843
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1754844
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1760601
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1765293
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1772034
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1775779
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1784006
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1787398
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1795455
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1798671
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1806750
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1809716
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1815450
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1816315
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1817192
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1818616
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1819539
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1820678
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1821954
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1822746
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1825385
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1827984
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1830603
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1832811
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1841527
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1847391
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1850205
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1852604
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1855002
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1857020
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1865421
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1871185
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1872731
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1873855
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1877702
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1881072
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1883916
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1885871
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1887901
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1889218
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1889495
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1889809
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1890740
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1891220
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1893264
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1895674
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1897943
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1899272
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1900610
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1901223
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1901438
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1901716
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1903096
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1904687
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1905752
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1909959
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1912745
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1915643
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1917608
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1919732
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1920843
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1921102
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1925709
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1926268
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1926824
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1927371
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1927933
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1929230
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1930613
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1932246
00:45:56.122 Removing: /var/run/dpdk/spdk_pid1933833
00:45:56.122 Clean
00:45:56.380 17:02:53 -- common/autotest_common.sh@1451 -- # return 0
00:45:56.380 17:02:53 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:45:56.380 17:02:53 -- common/autotest_common.sh@730 -- # xtrace_disable
00:45:56.380 17:02:53 -- common/autotest_common.sh@10 -- # set +x
00:45:56.380 17:02:53 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:45:56.380 17:02:53 -- common/autotest_common.sh@730 -- # xtrace_disable
00:45:56.380 17:02:53 -- common/autotest_common.sh@10 -- # set +x
00:45:56.380 17:02:53 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:45:56.380 17:02:53 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]]
00:45:56.380 17:02:53 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log
00:45:56.380 17:02:53 -- spdk/autotest.sh@395 -- # hash lcov
00:45:56.380 17:02:53 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:45:56.380 17:02:53 -- spdk/autotest.sh@397 -- # hostname
00:45:56.380 17:02:53 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-19 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info
00:45:56.640 geninfo: WARNING: invalid characters removed from testname!
00:46:23.215 17:03:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:46:26.506 17:03:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:46:29.046 17:03:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:46:30.953 17:03:27 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:46:33.494 17:03:30 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:46:36.033 17:03:32 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:46:38.572 17:03:35 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:46:38.572 17:03:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:46:38.572 17:03:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:46:38.572 17:03:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:46:38.572 17:03:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:46:38.572 17:03:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:46:38.572 17:03:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:46:38.572 17:03:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:46:38.572 17:03:35 -- paths/export.sh@5 -- $ export PATH
00:46:38.572 17:03:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:46:38.572 17:03:35 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:46:38.572 17:03:35 -- common/autobuild_common.sh@447 -- $ date +%s
00:46:38.572 17:03:35 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721833415.XXXXXX
00:46:38.572 17:03:35 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721833415.3uwUzW
00:46:38.572 17:03:35 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:46:38.572 17:03:35 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:46:38.572 17:03:35 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:46:38.572 17:03:35 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:46:38.572 17:03:35 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:46:38.572 17:03:35 -- common/autobuild_common.sh@463 -- $ get_config_params
00:46:38.572 17:03:35 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:46:38.572 17:03:35 -- common/autotest_common.sh@10 -- $ set +x
00:46:38.572 17:03:35 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:46:38.572 17:03:35 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:46:38.572 17:03:35 -- pm/common@17 -- $ local monitor
00:46:38.572 17:03:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:38.572 17:03:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:38.572 17:03:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:38.572 17:03:35 -- pm/common@21 -- $ date +%s
00:46:38.572 17:03:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:38.572 17:03:35 -- pm/common@21 -- $ date +%s
00:46:38.572 17:03:35 -- pm/common@25 -- $ sleep 1
00:46:38.572 17:03:35 -- pm/common@21 -- $ date +%s
00:46:38.572 17:03:35 -- pm/common@21 -- $ date +%s
00:46:38.572 17:03:35 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721833415
00:46:38.572 17:03:35 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721833415
00:46:38.572 17:03:35 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721833415
00:46:38.572 17:03:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721833415
00:46:38.572 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721833415_collect-vmstat.pm.log
00:46:38.572 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721833415_collect-cpu-load.pm.log
00:46:38.573 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721833415_collect-cpu-temp.pm.log
00:46:38.573 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721833415_collect-bmc-pm.bmc.pm.log
00:46:39.512 17:03:36 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:46:39.512 17:03:36 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:46:39.512 17:03:36 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:46:39.512 17:03:36 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:46:39.512 17:03:36 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:46:39.512 17:03:36 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:46:39.512 17:03:36 -- spdk/autopackage.sh@19 -- $ timing_finish
00:46:39.512 17:03:36 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:46:39.512 17:03:36 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:46:39.512 17:03:36 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:46:39.512 17:03:36 -- spdk/autopackage.sh@20 -- $ exit 0
00:46:39.512 17:03:36 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:46:39.512 17:03:36 -- pm/common@29 -- $ signal_monitor_resources TERM
00:46:39.512 17:03:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:46:39.512 17:03:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:39.512 17:03:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:46:39.512 17:03:36 -- pm/common@44 -- $ pid=1948995
00:46:39.512 17:03:36 -- pm/common@50 -- $ kill -TERM 1948995
00:46:39.512 17:03:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:39.512 17:03:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:46:39.512 17:03:36 -- pm/common@44 -- $ pid=1948997
00:46:39.512 17:03:36 -- pm/common@50 -- $ kill -TERM 1948997
00:46:39.512 17:03:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:39.512 17:03:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:46:39.512 17:03:36 -- pm/common@44 -- $ pid=1948998
00:46:39.512 17:03:36 -- pm/common@50 -- $ kill -TERM 1948998
00:46:39.512 17:03:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:46:39.512 17:03:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:46:39.512 17:03:36 -- pm/common@44 -- $ pid=1949021
00:46:39.512 17:03:36 -- pm/common@50 -- $ sudo -E kill -TERM 1949021
00:46:39.512 + [[ -n 1373814 ]]
00:46:39.512 + sudo kill 1373814
00:46:39.522 [Pipeline] }
00:46:39.539 [Pipeline] // stage
00:46:39.544 [Pipeline] }
00:46:39.561 [Pipeline] // timeout
00:46:39.568 [Pipeline] }
00:46:39.585 [Pipeline] // catchError
00:46:39.590 [Pipeline] }
00:46:39.613 [Pipeline] // wrap
00:46:39.622 [Pipeline] }
00:46:39.639 [Pipeline] // catchError
00:46:39.650 [Pipeline] stage
00:46:39.652 [Pipeline] { (Epilogue)
00:46:39.668 [Pipeline] catchError
00:46:39.670 [Pipeline] {
00:46:39.684 [Pipeline] echo
00:46:39.686 Cleanup processes
00:46:39.692 [Pipeline] sh
00:46:39.989 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:46:39.989 1949114 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache
00:46:39.989 1949444 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:46:40.054 [Pipeline] sh
00:46:40.339 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:46:40.339 ++ grep -v 'sudo pgrep'
00:46:40.339 ++ awk '{print $1}'
00:46:40.339 + sudo kill -9 1949114
00:46:40.352 [Pipeline] sh
00:46:40.636 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:46:40.636 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:46:50.617 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:46:55.909 [Pipeline] sh
00:46:56.198 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:46:56.198 Artifacts sizes are good
00:46:56.216 [Pipeline] archiveArtifacts
00:46:56.225 Archiving artifacts
00:46:56.402 [Pipeline] sh
00:46:56.706 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest
00:46:56.719 [Pipeline] cleanWs
00:46:56.727 [WS-CLEANUP] Deleting project workspace...
00:46:56.727 [WS-CLEANUP] Deferred wipeout is used...
00:46:56.732 [WS-CLEANUP] done
00:46:56.734 [Pipeline] }
00:46:56.749 [Pipeline] // catchError
00:46:56.759 [Pipeline] sh
00:46:57.034 + logger -p user.info -t JENKINS-CI
00:46:57.045 [Pipeline] }
00:46:57.065 [Pipeline] // stage
00:46:57.073 [Pipeline] }
00:46:57.092 [Pipeline] // node
00:46:57.099 [Pipeline] End of Pipeline
00:46:57.132 Finished: SUCCESS